Automated lung parenchymal classification usually relies on supervised learning of expert-chosen regions representative of the visually differentiable HRCT patterns specific to different pathologies (e.g., emphysema, ground glass, honeycombing, reticular, and normal). Since no single similarity measure is the most discriminating, a plurality of weak learners can be combined to improve machine learnability. Though a number of quantitative combination strategies exist, their efficacy is data- and domain-dependent. In this paper, we investigate multiple (N=12) quantitative consensus approaches to combine the clusters obtained with multiple (n=33) probability density-based similarity measures. Our study shows that hypergraph-based meta-clustering and probabilistic clustering provide optimal expert-metric agreement.
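The hypergraph-based meta-clustering itself is not reproduced here; as a hedged sketch of what consensus over many base partitions can look like, the following evidence-accumulation approach builds a co-association matrix from the per-metric clusterings and cuts it hierarchically. All names are illustrative, not the paper's.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def consensus_cluster(base_labels, n_clusters):
    """base_labels: one 1-D label array per base (per-metric) clustering."""
    base_labels = [np.asarray(l) for l in base_labels]
    n = len(base_labels[0])
    co = np.zeros((n, n))
    for labels in base_labels:
        co += labels[:, None] == labels[None, :]
    co /= len(base_labels)                 # co-association frequency
    dist = 1.0 - co                        # turn agreement into distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method='average')
    return fcluster(Z, t=n_clusters, criterion='maxclust')
```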
Radiologists are adept at recognizing the appearance of lung parenchymal abnormalities in CT scans. However,
the resulting differential diagnosis is inconsistent owing to subjective aggregation, mandating supervised classification. Towards
optimizing emphysema classification, we introduce a physician-in-the-loop feedback approach to minimize
uncertainty in the selected training samples. Using multi-view inductive learning with the training samples,
an ensemble of Support Vector Machine (SVM) models, each based on a specific pair-wise dissimilarity metric,
was constructed in less than six seconds. In the active relearning phase, the ensemble-expert label conflicts were
resolved by an expert. This just-in-time feedback with unoptimized SVMs yielded a 15% increase in classification
accuracy and a 25% reduction in the number of support vectors. The generality of relearning was assessed in the
optimized parameter space of six different classifiers across seven dissimilarity metrics, where the resultant average
accuracy improvement was 21%. The cooperative feedback method proposed here could enhance both diagnostic and
staging throughput efficiency in chest radiology practice.
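A minimal sketch, under assumed names, of the ensemble-conflict step described above: one SVM per precomputed pair-wise dissimilarity metric (kernelized here with a Gaussian of the dissimilarity, an assumption on our part), with ensemble-expert disagreements queued for expert relearning.

```python
import numpy as np
from sklearn.svm import SVC

def train_ensemble(dissim_matrices, y, gamma=1.0):
    """dissim_matrices: list of n x n matrices; y: non-negative int labels."""
    models = []
    for D in dissim_matrices:
        K = np.exp(-gamma * D ** 2)        # kernelize the dissimilarity
        models.append(SVC(kernel='precomputed').fit(K, y))
    return models

def conflicts(models, dissim_matrices, y, gamma=1.0):
    votes = np.stack([m.predict(np.exp(-gamma * D ** 2))
                      for m, D in zip(models, dissim_matrices)])
    majority = np.apply_along_axis(
        lambda v: np.bincount(v).argmax(), 0, votes)
    return np.where(majority != y)[0]      # indices to send to the expert
```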
Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from
CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for
automating lung tissue classification are based on a single, elusive disease-differentiating metric; this undermines
their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of
probability density functions (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns:
5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are
refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble
technique. The super clusters were validated against the consensus agreement of four clinical experts. The
aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the
proposed workflow could make automation of lung tissue classification a clinical reality.
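The abstract does not enumerate the 15 pdf-based metrics, so purely as an illustration, here are two common histogram-based candidates (Bhattacharyya distance and Jensen-Shannon divergence) computed between VOI intensity histograms; the HU window and the synthetic VOIs are stand-ins for real cropped scan data.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def voi_pdf(voi, bins=64, rng=(-1024, 400)):     # HU window: an assumption
    hist, _ = np.histogram(voi, bins=bins, range=rng)
    return hist / max(hist.sum(), 1)

def bhattacharyya_distance(p, q):
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

# stand-in VOIs; real inputs would be HU arrays extracted from the scans
gen = np.random.default_rng(0)
voi_a = gen.normal(-850, 40, 4096)               # emphysema-like HU values
voi_b = gen.normal(-700, 60, 4096)               # denser parenchyma
p, q = voi_pdf(voi_a), voi_pdf(voi_b)
scores = {'bhattacharyya': bhattacharyya_distance(p, q),
          'jensen_shannon': float(jensenshannon(p, q))}
```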
Denoising is a critical preconditioning step for quantitative analysis of medical images. Despite promises of more
consistent diagnosis, denoising techniques are seldom explored in clinical settings. While this may be attributed
to the esoteric nature of the parameter-sensitive algorithms, the lack of quantitative measures of their efficacy in
enhancing clinical decision making is a primary cause of physician apathy. This paper addresses this issue
by exploring the effect of denoising on the integrity of supervised lung parenchymal clusters. Multiple Volumes
of Interest (VOIs) were selected across multiple high resolution CT scans to represent samples of different
patterns (normal, emphysema, ground glass, honeycombing and reticular). The VOIs were labeled through
consensus of four radiologists. The original datasets were filtered by multiple denoising techniques (median
filtering, anisotropic diffusion, bilateral filtering and non-local means) and the corresponding filtered VOIs were
extracted. A plurality of cluster indices based on multiple histogram-based pair-wise similarity measures was used
to assess the quality of supervised clusters in the original and filtered spaces. The resultant rank orders were
analyzed using the Borda criteria to find the denoising-similarity measure combination that has the best cluster
quality. Our exhaustive analysis reveals that (a) for a number of similarity measures, the cluster quality is inferior
in the filtered space; and (b) for measures that benefit from denoising, simple median filtering outperforms
non-local means and bilateral filtering. Our study suggests the need to judiciously choose, if required, a denoising
technique that does not deteriorate the integrity of supervised clusters.
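A minimal sketch of the Borda aggregation step named above, with synthetic rank orders standing in for the paper's actual per-index rankings: each cluster-quality index ranks the denoising options, points are summed, and the highest total wins.

```python
methods = ['none', 'median', 'anisotropic', 'bilateral', 'nlm']
# one best-to-worst ranking per cluster-quality index (synthetic data)
rankings = [['median', 'none', 'nlm', 'bilateral', 'anisotropic'],
            ['none', 'median', 'bilateral', 'nlm', 'anisotropic'],
            ['median', 'nlm', 'none', 'anisotropic', 'bilateral']]

scores = {m: 0 for m in methods}
for ranking in rankings:
    for points, m in enumerate(reversed(ranking)):
        scores[m] += points                      # last place earns 0
winner = max(scores, key=scores.get)             # 'median' for this data
```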
KEYWORDS: Distortion, Image quality, Image processing, Visualization, Image compression, Magnetic resonance imaging, Medical imaging, Data processing, Signal to noise ratio, Binary data
To accomplish a given computational task, a number of algorithmic and heuristic approaches can be employed to act upon the ever-varying input data. Depending upon the assumptions made regarding the data, the algorithm and the task, the end result from each of these approaches can differ. Currently, there does not exist an automatic, robust, precise, simple, and algorithm-independent measure to rate the accuracy of a multiplicity of algorithms at accomplishing a given task on given data. The lack of such a measure severely restricts the integration of "data-centric" computational tools. This paper proposes a Fourier-domain based method to robustly assess and rank the accuracy of a multiplicity of abstractions vis-a-vis the original data. The method is scalable across dimensions and data types and is blind to the task associated with the generation of the competing to-be-rated abstractions.
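The published method's specifics (windowing, band weighting) are not given in the abstract; this hedged sketch conveys the Fourier-domain idea by correlating the magnitude spectra of each candidate abstraction with the original and ranking by the resulting coefficient.

```python
import numpy as np

def spectral_score(original, abstraction):
    """Correlation between the magnitude spectra of two same-shape arrays."""
    F = np.abs(np.fft.fftn(original)).ravel()
    G = np.abs(np.fft.fftn(abstraction)).ravel()
    return np.corrcoef(F, G)[0, 1]

def rank_abstractions(original, abstractions):
    """Return indices of the candidate abstractions, best first."""
    scores = [spectral_score(original, a) for a in abstractions]
    return sorted(range(len(scores)), key=lambda i: -scores[i])
```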
Despite tremendous improvements in MR imaging, the optimization of fast imaging techniques for the display of T2 contrast has not yet been accomplished. Existing methods that make use of prior information (feature-recognizing MRI, constrained reconstruction techniques, dynamic imaging, etc.) are sub-optimal. In this paper, we present a fast, robust method to enhance the T2-related contrast in an MRI image acquired at, but not restricted to, a "just-enough-to-highlight-T2" repetition time so as to produce a computed mosaic of the same image at different repetition (TR) and echo (TE) times. This leads to a substantial reduction in scan time and the simultaneous provision of multiple snapshots of the same image at different TR and TE settings. The enhanced mapping is performed using a feature-guided, non-linear equalization technique based on prior knowledge. The proposed methodology could be synergistically cascaded with other fast imaging techniques to further improve the acquisition rate. The clinical applications of the proposed contrast enhancement technique include: a pre-scan application in which projected images assist in prescribing a subsequent image acquisition; a real-time application in which images are acquired quickly with a short TR and projected images at long TR are produced in near real-time; and post-processing applications in which enhanced images are produced to assist in diagnosis.
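The feature-guided equalization itself is not spelled out in the abstract; for orientation, this sketch shows the standard spin-echo signal model that any computed TR/TE mosaic must honor, with assumed PD/T1/T2 maps as inputs.

```python
import numpy as np

def spin_echo(pd, t1, t2, tr, te):
    """Project signal intensity at repetition time tr and echo time te (ms)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# synthetic parameter maps standing in for real fitted maps
pd = np.ones((64, 64))
t1 = np.full((64, 64), 900.0)
t2 = np.full((64, 64), 100.0)
# one mosaic cell: long-TR/long-TE yields a T2-weighted projection
t2_weighted = spin_echo(pd, t1, t2, tr=4000.0, te=90.0)
```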
The process of transforming the non-linear magnetic field perturbations induced by radio waves into linear reconstructions based on Radon and Fourier transforms has resulted in MR acquisitions in which intensities do not have a fixed meaning, not even within the same protocol, for the same body region, for images obtained on the same scanner, for the same patient, on the same day. This makes robust image interpretation and processing extremely challenging. The status quo of fine-tuning an image processing algorithm in the ever-varying MRI intensity space could best be summarized as a "random search through the parameter space". This work demonstrates the implications of standardizing the contrast across multiple tissue types on the robustness and efficiency of image processing algorithms. Contrast standardization is performed using a prior-knowledge-driven, feature-guided, fast, non-linear equalization technique. Without loss of generality, skull stripping and brain tissue segmentation are considered in this investigation. Results show that the iterative image processing algorithms converge faster with minimal parameter tweaking, and the abstractions are significantly better in the contrast-standardized space than in the native stochastic space.
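The paper's feature-guided mapping is not reproduced here; as a hedged stand-in, this sketch shows landmark-based piecewise-linear standardization in the Nyul-Udupa style, which pursues the same goal of giving intensities a fixed meaning across acquisitions.

```python
import numpy as np

def standardize(img, standard_landmarks, pcts=(1, 10, 50, 90, 99)):
    """Map per-image percentile landmarks onto a fixed standard scale."""
    src = np.percentile(img, pcts)                  # per-image landmarks
    return np.interp(img, src, standard_landmarks)  # clamps beyond the ends

# e.g. map every scan onto one fixed scale (landmark values illustrative)
# std_img = standardize(img, standard_landmarks=[0, 410, 2048, 3680, 4095])
```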
The degree of match between the delineation produced by a segmentation technique and the ground truth can be assessed using robust "presence-absence" resemblance measures. Previously, we investigated and introduced an exhaustive list of similarity indices for assessing multiple segmentation techniques. However, these measures are highly sensitive to even minor boundary perturbations, which inevitably manifest in segmentations of random biphasic spaces reminiscent of the stochastic pore-solid distributions in tissue engineering scaffolds. This paper investigates ideas adapted from ecology to emphasize global resemblances and ignore minor local dissimilarities. It uses concepts from graph theory to perform controlled local mutations in order to maximize the similarities. The effect of this adjustment is investigated on a comprehensive list (forty-nine) of similarity indices sensitive to the over- and under-estimation errors associated with image delineation tasks.
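For concreteness, two of the classic presence-absence measures written in the standard 2x2 contingency notation (a, b, c, d) used in ecology; the paper's graph-theoretic mutation step is not reproduced here.

```python
import numpy as np

def contingency(S, G):
    """S: segmentation, G: ground truth; both binary arrays."""
    S, G = S.astype(bool), G.astype(bool)
    a = np.sum(S & G)        # present in both
    b = np.sum(S & ~G)       # over-estimation
    c = np.sum(~S & G)       # under-estimation
    d = np.sum(~S & ~G)      # absent in both
    return a, b, c, d

def jaccard(S, G):
    a, b, c, _ = contingency(S, G)
    return a / (a + b + c)

def dice(S, G):
    a, b, c, _ = contingency(S, G)
    return 2 * a / (2 * a + b + c)
```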
Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. Current scaffold fabrication techniques can be broadly classified into (a) conventional, irreproducible, stochastic techniques producing biomorphic "secundum naturam" but sub-optimal scaffold architecture, and (b) rapidly emerging, repeatable, computer-controlled Solid Freeform Fabrication (SFF) producing "contra naturam" scaffold architecture. This paper presents an image-based scaffold optimization strategy based on microCT images of conventional scaffolds. This approach, attempted and perfected here for the first time, synergistically exploits the two orthogonal techniques to create repeatable, biomorphic scaffolds with optimal scaffold geometry. The ramifications of this image-based, computer-assisted intervention to improve the status quo of scaffold fabrication might contribute to the previously elusive deployment of promising benchside tissue analogs to the clinical bedside.
Tissue engineering is an interdisciplinary effort aimed at the repair and regeneration of biological tissues through the application and control of cells, porous scaffolds and growth factors. The regeneration of specific tissues guided by tissue-analogous substrates is dependent on diverse scaffold architectural indices that can be derived quantitatively from the microCT and microMR images of the scaffolds. However, the randomness of pore-solid distributions in conventional stochastic scaffolds presents unique computational challenges. As a result, image-based characterization of scaffolds has been predominantly qualitative. In this paper, we discuss quantitative image-based techniques that can be used to compute the metrological indices of porous tissue engineering scaffolds. While bulk averaged quantities such as porosity and surface area are derived directly from the optimal pore-solid delineations, the spatially distributed geometric indices are derived from the medial axis representations of the pore network. The computational framework proposed in this paper (to the best of our knowledge, for the first time in tissue engineering) might have profound implications towards unraveling the symbiotic structure-function relationship of porous tissue engineering scaffolds.
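A hedged sketch of the two index families just described, using assumed scikit-image/SciPy building blocks: a bulk quantity (porosity) read straight from the pore-solid delineation, and pore-radius statistics read off the medial axis of the pore network.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize   # 3-D support needs recent skimage

def porosity(binary_volume):
    """binary_volume: True = pore voxels; porosity is the pore fraction."""
    return binary_volume.mean()

def pore_skeleton_stats(binary_volume):
    skel = skeletonize(binary_volume).astype(bool)
    # distance of each skeleton voxel to the solid phase ~ local pore radius
    radii = ndimage.distance_transform_edt(binary_volume)[skel]
    return radii.mean(), radii.max()
```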
The efficacy of image processing and analysis on skull-stripped MR images vis-a-vis the original images is well established. Additionally, compliance with the Health Insurance Portability and Accountability Act (HIPAA) requires neuroimage repositories to anonymize images before sharing them. This makes the non-trivial skull stripping process all the more significant. While a number of optimal approaches exist to strip the skull from T1-weighted MR images, to the best of our knowledge there is no simple, robust, fast, parameter-free and fully automatic technique to perform the same on T2-weighted images. This paper presents a strategy to fill this gap. It employs a fast parameterization of the T2 image intensity onto a standardized T1 intensity scale. The parametric "T1-like" image obtained via the transformation, which takes only a few seconds to compute, is subsequently processed by any of the many T1-based brain extraction techniques to derive the brain mask. Masking the original T2 image with this brain mask strips the skull. By standardizing the intensity of the parametric image, preset algorithm-specific parameters (if any) can be used across multiple datasets. The proposed scheme has been used on a number of phantom and clinical T2 brain datasets to successfully strip the skull.
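The actual T2-to-T1 parameterization is not given in the abstract; purely to illustrate the shape of the pipeline, this sketch inverts and rescales a T2 volume onto a standardized "T1-like" range before handing it to any off-the-shelf T1 brain-extraction tool (extract_brain_mask is a hypothetical stand-in for such a tool).

```python
import numpy as np

def t1_like(t2_img, standard_range=(0.0, 4095.0)):
    """Crude T1-like proxy: invert T2 so bright CSF becomes dark."""
    hi = np.percentile(t2_img, 99)
    inv = np.clip(hi - t2_img, 0, None)
    inv = (inv - inv.min()) / max(np.ptp(inv), 1e-9)
    lo, top = standard_range
    return lo + inv * (top - lo)

# mask = extract_brain_mask(t1_like(t2_img))   # any T1-based tool (hypothetical)
# stripped = t2_img * mask
```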
Existing similarity metrics for comparing the accuracy of n-dimensional image segmentation against the corresponding ground truth are restricted to a limited set of volume fractions which, by themselves, lack robustness. This paper introduces a comprehensive list of linear and non-linear similarity measures widely used in such diverse fields as ecology, toxicology and patent trending. These metrics, based on binary "absence/presence" data, were computed to assess the delineation of tissue engineering scaffold images into porous and polymeric space using a wide variety of thresholding techniques.
Tissue engineering involves regenerating damaged or malfunctioning organs using cells, biomolecules, and synthetic or natural scaffolds. Based on their intended roles, scaffolds can be injected as space-fillers or be preformed and implanted to provide mechanical support. Preformed scaffolds are biomimetic "trellis-like" structures which, on implantation and integration, act as tissue/organ surrogates. Customized, computer-controlled, and reproducible preformed scaffolds can be fabricated using Computer Aided Design (CAD) techniques and rapid prototyping devices. A curved, monolithic construct with minimal surface area constitutes an efficient substrate geometry that promotes cell attachment, migration and proliferation. However, current CAD approaches do not provide such a biomorphic construct. We address this critical issue by presenting one of the very first physical realizations of minimal surfaces towards the construction of efficient unit-cell based tissue engineering scaffolds. Mask programmability and the optimal packing density of triply periodic minimal surfaces are used to construct the optimal pore geometry. Budgeted polygonization and progressive minimal surface refinement facilitate the machinability of these surfaces. The efficient stress distributions, as deduced from Finite Element simulations, favor the use of these scaffolds for orthopedic applications.
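A minimal sketch of one such unit cell: the level-set approximation of the Schwarz P triply periodic minimal surface, cos x + cos y + cos z = 0, polygonized with marching cubes. The paper's budgeted polygonization and progressive refinement steps are not reproduced.

```python
import numpy as np
from skimage.measure import marching_cubes

# sample the implicit field over one periodic unit cell
n = 64
x, y, z = np.meshgrid(*[np.linspace(0, 2 * np.pi, n)] * 3, indexing='ij')
field = np.cos(x) + np.cos(y) + np.cos(z)

# extract the zero level set as a triangle mesh
verts, faces, normals, values = marching_cubes(field, level=0.0)
```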
KEYWORDS: Image quality, Visualization, Image compression, Image processing, Distortion, Magnetic resonance imaging, Quality measurement, Medical imaging, Receivers, Signal to noise ratio
Image quality assessment plays a crucial role in many applications. Since the ultimate receivers in most image processing environments are humans, objective measures of quality that correlate with subjective perception are actively sought. Limited success has been achieved in deriving robust quantitative measures that can automatically and efficiently predict perceived image quality. The majority of structural similarity techniques are based on aggregating local statistics within a local window, and choosing window sizes that produce results compatible with visual perception is a challenging task with these methods. This paper introduces an intuitive metric that exploits the dominance of Fourier phase over magnitude in images. The metric is based on cross correlation of phase images to assess image quality. Since phase captures structural information, a phase-based similarity metric should best mimic visual perception. With the availability of multi-dimensional Fourier and wavelet transforms, this metric can be directly used to assess the quality of multi-dimensional images.
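A compact sketch of the proposed idea, with normalization choices that are our assumptions: compare two images by the normalized cross-correlation of their Fourier phase.

```python
import numpy as np

def phase_similarity(img_a, img_b):
    """Normalized cross-correlation of the Fourier phase of two arrays."""
    pa = np.angle(np.fft.fftn(img_a)).ravel()
    pb = np.angle(np.fft.fftn(img_b)).ravel()
    pa, pb = pa - pa.mean(), pb - pb.mean()
    denom = np.linalg.norm(pa) * np.linalg.norm(pb)
    return float(pa @ pb / denom) if denom else 1.0
```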
Tissue engineering attempts to address the ever widening gap between the demand and supply of organ and tissue transplants using natural and biomimetic scaffolds. The regeneration of specific tissues aided by synthetic materials is dependent on the structural and morphometric properties of the scaffold. These properties can be derived non-destructively using quantitative analysis of high resolution microCT scans of scaffolds. Thresholding of the scanned images into polymeric and porous phase is central to the outcome of the subsequent structural and morphometric analysis. Visual thresholding of scaffolds produced using stochastic processes is inaccurate. Depending on the algorithmic assumptions made, automatic thresholding might also be inaccurate. Hence there is a need to analyze the performance of different techniques and propose alternate ones, if needed. This paper provides a quantitative comparison of different thresholding techniques for segmenting scaffold images. The thresholding algorithms examined include those that exploit spatial information, locally adaptive characteristics, histogram entropy information, histogram shape information, and clustering of gray-level information. The performance of different techniques was evaluated using established criteria, including misclassification error, edge mismatch, relative foreground error, and region non-uniformity. Algorithms that exploit local image characteristics seem to perform much better than those using global information.
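A hedged sketch of the comparison loop using assumed scikit-image thresholders: segment a scaffold slice several ways and score each result against a reference mask with misclassification error, one of the four criteria listed above.

```python
import numpy as np
from skimage.filters import threshold_otsu, threshold_yen, threshold_local

def misclassification_error(mask, reference):
    """Fraction of pixels assigned to the wrong phase."""
    return np.mean(mask.astype(bool) != reference.astype(bool))

def compare(slice_img, reference):
    candidates = {
        'otsu':  slice_img > threshold_otsu(slice_img),
        'yen':   slice_img > threshold_yen(slice_img),
        'local': slice_img > threshold_local(slice_img, block_size=51),
    }
    return {name: misclassification_error(m, reference)
            for name, m in candidates.items()}
```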
Since the 1810 cranioscopy claims of F. J. Gall, human brain mapping has evolved into a challenging but fascinating scientific endeavor. The works of Jean Talairach in stereotaxic neurosurgery have revolutionized the use of brain atlases to identify the spatial locations of brain activations derived from functional images. The availability of digital print atlases, the standardization of Talairach coordinates as the means of reporting activation spots, and the availability of the Talairach daemon have led to a proliferation of publications in human brain mapping. However, the VOTL database used in the Talairach daemon employs nearest-neighbor interpolation of the sparse and unevenly spaced Talairach atlas. This exacerbates the already existing errors in brain mapping. This paper introduces the use of a shape-based interpolation algorithm to derive a high resolution three dimensional Talairach Atlas. It uses a feature-guided approach for shape-based interpolation of porous and tortuous binary objects. The feature points are derived from the boundaries of the candidate sources and matched non-linearly using a robust, outlier-rejecting, non-linear point matching algorithm based on thin plate splines. The proposed scheme correctly handles objects with holes, large offsets and drastic invagination and significantly enhances the sparse Talairach Atlas. A similar approach applied to the Schaltenbrand and Wahren atlas would add appreciable value to functional neurosurgery.
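The thin-plate-spline feature matching is not reproduced here; this sketch shows the classic shape-based interpolation core the paper builds on: signed distance maps of two binary slices, blended and re-thresholded to synthesize an intermediate slice.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Positive inside the object, negative outside."""
    mask = mask.astype(bool)
    return edt(mask) - edt(~mask)

def interpolate_slices(mask_a, mask_b, alpha=0.5):
    """Synthesize the slice a fraction alpha of the way from a to b."""
    d = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
    return d > 0
```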
Noise in medical images is common. It occurs during image formation, recording, transmission, and subsequent image processing. Image smoothing attempts to locally preprocess these images, primarily to suppress noise by exploiting the redundancy in the image data. 1D Savitzky-Golay filtering provides smoothing without loss of resolution by assuming that distant points have significant redundancy. This redundancy is exploited to reduce the noise level: the underlying function is locally fitted by a polynomial whose coefficients are data-independent and hence can be calculated in advance. Geometric representations of data as patches and surfaces have been used in volumetric modeling and reconstruction. Similar representations can also be used in image smoothing. This paper presents the 2D and 3D extensions of 1D Savitzky-Golay filters. The idea is to fit a 2D/3D polynomial to a 2D/3D sub-region of the image. As in the 1D case, the coefficients of the polynomial are computed a priori as a linear filter. The filter coefficients preserve higher moments and always have a central positive lobe with smaller outlying corrections of both positive and negative magnitudes. To show the efficacy of this smoothing, it is used in-line with volume rendering while computing the sampling points and the gradient.
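A minimal sketch of the 2D extension described above: fit a bivariate polynomial over a (2k+1) x (2k+1) window by least squares; the data-independent smoothing kernel is the first row of the pseudo-inverse, reshaped to the window.

```python
import numpy as np
from scipy.ndimage import convolve

def savgol2d_kernel(window=5, order=2):
    k = window // 2
    ii, jj = np.mgrid[-k:k + 1, -k:k + 1]
    # bivariate basis x^p * y^q with p + q <= order; column 0 is the constant
    A = np.column_stack([(ii.ravel() ** p) * (jj.ravel() ** q)
                         for p in range(order + 1)
                         for q in range(order + 1 - p)])
    # fitted value at the window center equals the constant coefficient,
    # so the smoothing kernel is row 0 of the pseudo-inverse
    return np.linalg.pinv(A)[0].reshape(window, window)

image = np.random.rand(64, 64)                  # stand-in for a real slice
smoothed = convolve(image, savgol2d_kernel(5, 2), mode='nearest')
```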
The goal of unsupervised multispectral classification is to precisely identify objects in a scene by incorporating the complementary information available in spatially registered multispectral images. If the channels are less noisy and are as statistically independent as possible, the performance of the unsupervised classifier will be better. The discriminatory power of the classifier also increases if the individual channels have good contrast. However, enhancing the contrast of the channels individually does not necessarily produce good results. Hence there is a need to preprocess the channels such that they have a high contrast and are as statistically independent as possible. Independent Component Analysis (ICA) is a signal processing technique that expresses a set of random variables as linear combinations of statistically independent component variables. The estimation of ICA typically involves formulating a cost function which measures nongaussianity/gaussianity which is subsequently maximized or minimized. The resulting images are maximally statistically independent and have high contrast. Unsupervised classification on these images captures more information than on the original images. In preliminary studies, we were able to classify detailed neuroanatomical structures such as the putamen and choroid plexus, from the independent component channels. These structures could not be delineated from the original images using the same classifier.
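A hedged sketch of the preprocess-then-cluster pipeline, with k-means standing in for the unspecified unsupervised classifier: FastICA renders the registered channels maximally statistically independent before per-pixel clustering.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

def classify(channels, n_classes):
    """channels: (n_channels, H, W) array of spatially registered images."""
    n_ch, h, w = channels.shape
    X = channels.reshape(n_ch, -1).T             # one sample per pixel
    S = FastICA(n_components=n_ch, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(S)
    return labels.reshape(h, w)
```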