The recent rapid success of deep convolutional neural networks (CNNs) on many computer vision tasks largely benefits from the well-annotated Pascal VOC, ImageNet, and MS COCO datasets. However, it is challenging to obtain ImageNet-like annotations (1,000 classes) in the medical imaging domain because the lay crowdsourcing community lacks clinical training. We address this problem by presenting a semi-supervised training method for neural networks with true-class and pseudo-class (un-annotated class) labels on partially annotated training data. The true-class labels are supervised annotations from clinical professionals. The pseudo-class labels are obtained by unsupervised clustering of un-annotated data. Our method rests upon the hypothesis that more coherent annotations with discriminative classes lead to better-trained CNN models. We validated our method on extra-coronary calcification detection in low-dose CT scans. The CNN trained with the true class and 10 pseudo-classes achieved a 78.0% sensitivity at 10 false positives per scan (0.3 false positives per slice), which significantly outperformed the CNN trained with the true class only (25.0% sensitivity at 10 false positives per scan).
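The pseudo-class construction described above can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the 16-dimensional feature vectors, sample counts, and the use of k-means are all assumptions made for the example.

```python
# Sketch: un-annotated samples are clustered into K pseudo-classes,
# which are then combined with the expert-annotated true class to form
# a multi-class training set. Feature dimensionality and K are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
annotated_X = rng.normal(0.0, 1.0, size=(50, 16))    # true-class samples
unannotated_X = rng.normal(2.0, 1.0, size=(200, 16))  # no expert labels

K = 10  # number of pseudo-classes
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(unannotated_X)

true_labels = np.zeros(len(annotated_X), dtype=int)   # class 0: true class
pseudo_labels = 1 + kmeans.predict(unannotated_X)     # classes 1..K

X = np.vstack([annotated_X, unannotated_X])
y = np.concatenate([true_labels, pseudo_labels])
print(len(set(y.tolist())))  # 11: one true class plus 10 pseudo-classes
```

The combined `(X, y)` set can then be fed to any multi-class CNN or classifier, which is the semi-supervised training idea the abstract describes.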
Pericardial effusion on CT scans demonstrates very high shape and volume variability and very low contrast to adjacent structures, which prevents traditional automated segmentation methods from achieving high accuracy. Deep neural networks have been widely used for image segmentation in CT scans. In this work, we present a two-stage method for pericardial effusion localization and segmentation. In the first stage, we localize the pericardial area within the entire CT volume, providing a reliable bounding box for the more refined segmentation stage. A coarse-scaled holistically-nested convolutional network (HNN) model is trained on the entire CT volume, and the resulting HNN per-pixel probability maps are thresholded to produce a bounding box covering the pericardial area. In the second stage, a fine-scaled HNN model is trained only on the bounding-box region for effusion segmentation, reducing background distraction. Quantitative evaluation is performed on a dataset of 25 CT scans (1,206 images) of patients with pericardial effusion. The segmentation accuracy of our two-stage method, measured by the Dice Similarity Coefficient (DSC), is 75.59±12.04%, significantly better than the accuracy (62.74±15.20%) of using only the coarse-scaled HNN model.
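The bounding-box step above, thresholding a per-pixel probability map and boxing the super-threshold region, can be sketched as follows. The threshold of 0.5 and the 2-pixel margin are illustrative choices, not values from the paper.

```python
import numpy as np

def bbox_from_probmap(prob, thr=0.5, margin=2):
    """Threshold a 2D per-pixel probability map and return a bounding box
    (rmin, rmax, cmin, cmax) covering all super-threshold pixels, padded by
    `margin` and clipped to the image. Assumes at least one pixel >= thr."""
    rows, cols = np.where(prob >= thr)
    rmin = max(rows.min() - margin, 0)
    rmax = min(rows.max() + margin, prob.shape[0] - 1)
    cmin = max(cols.min() - margin, 0)
    cmax = min(cols.max() + margin, prob.shape[1] - 1)
    return int(rmin), int(rmax), int(cmin), int(cmax)

prob = np.zeros((64, 64))
prob[20:30, 35:50] = 0.9            # mock coarse-model output blob
print(bbox_from_probmap(prob))      # (18, 31, 33, 51)
```

The fine-scaled model then sees only the cropped region, which is the "background distraction" reduction the abstract refers to.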
Artery calcification is commonly observed in elderly patients, especially those with chronic kidney disease, and may affect the coronary, carotid, and peripheral arteries. Vascular calcification has been associated with many clinical outcomes. Manual identification of calcification in CT scans requires substantial expert interaction, which makes it time-consuming and infeasible for large-scale studies. Many methods have been proposed for coronary artery calcification detection in cardiac CT scans; these methods commonly require coronary artery extraction before calcification detection. However, there are few works on abdominal or pelvic artery calcification detection. In this work, we present a method for automatic pelvic artery calcification detection on CT scans. The method uses the Faster region-based convolutional neural network (Faster R-CNN) to identify artery calcification directly, without requiring artery extraction, since pelvic artery extraction is itself challenging. Our method first generates category-independent region proposals for each slice of the input CT scan using a region proposal network (RPN). Each region proposal is then jointly classified and refined by a softmax classifier and a bounding-box regressor. We applied the detection method to 500 images from 20 CT scans of patients for evaluation. The detection system achieved a 77.4% average precision and an 85% sensitivity at 1 false positive per image.
Colitis is inflammation of the colon due to neutropenia, inflammatory bowel disease (such as Crohn disease), infection, or immune compromise. Colitis is often associated with thickening of the colon wall: for example, the mean wall thickness in Crohn disease is 11-13 mm, compared with less than 3 mm for a normal colon. Colitis can be debilitating or life-threatening, and early detection is essential to initiate proper treatment. In this work, we apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals to detect potential colitis on CT scans. Our method first generates around 3,000 category-independent region proposals for each slice of the input CT scan using selective search. A fixed-length feature vector is then extracted from each region proposal using a CNN. Finally, each region proposal is classified and assigned a confidence score with linear SVMs. We applied the detection method to 260 images from 26 CT scans of patients with colitis for evaluation. The detection system achieved a 0.85 sensitivity at 1 false positive per image.
The thyroid is an endocrine gland that regulates metabolism. Thyroid image analysis plays an important role in both diagnostic radiology and radiation oncology treatment planning. The low tissue contrast of the thyroid relative to surrounding anatomic structures makes manual segmentation of this organ challenging. This work proposes a fully automated system for thyroid segmentation on CT imaging. Following initial thyroid segmentation with multiatlas joint label fusion, a random forest (RF) algorithm was applied. Multiatlas label fusion transfers labels from labeled atlases to target images using deformable registration, and a consensus solution is formed based on optimal weighting of the atlases by their similarity to a given target image. Following the initial segmentation, a trained RF classifier employed voxel scanning to assign class-conditional probabilities to the voxels in the target image; thyroid voxels were assigned positive labels and non-thyroid voxels negative labels. Our method was evaluated on CT scans from 66 patients, 6 of which served as atlases for multiatlas label fusion. Used independently, the multiatlas label fusion method and the RF classifier achieved average Dice similarity coefficients of 0.72±0.13 and 0.57±0.14, respectively. Applied sequentially, multiatlas label fusion followed by RF correction increased the Dice similarity coefficient to 0.76±0.11, improving segmentation accuracy.
The thyroid gland plays an important role in clinical practice, especially for radiation therapy treatment planning. For patients with head and neck cancer, radiation therapy requires a precise delineation of the thyroid gland to be spared on the pre-treatment planning CT images to avoid thyroid dysfunction. In the current clinical workflow, the thyroid gland is normally delineated manually by radiologists or radiation oncologists, which is time-consuming and error-prone, so a system for automated segmentation of the thyroid is desirable. However, automated segmentation of the thyroid is challenging because the gland is inhomogeneous and surrounded by structures with similar intensities. In this work, the thyroid gland segmentation is initially estimated by a multiatlas label fusion (MALF) algorithm and then refined by supervised, statistical-learning-based voxel labeling with a random forest (RF) algorithm. MALF transfers expert-labeled thyroids from atlases to a target image using deformable registration; errors produced by label transfer are reduced by label fusion, which combines the results produced by all atlases into a consensus solution. The RF then employs an ensemble of decision trees trained on labeled thyroids to recognize their features. The trained forest classifier is applied to the thyroid estimate from MALF by voxel scanning to assign class-conditional probabilities, with voxels from the expert-labeled thyroids in CT volumes treated as the positive class and background non-thyroid voxels as negatives. We applied this automated thyroid segmentation system to CT scans of 20 patients. The results showed that MALF achieved an overall 0.75 Dice Similarity Coefficient (DSC) and that the RF classification further improved the DSC to 0.81.
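The consensus step of label fusion can be illustrated with a toy weighted vote. The per-voxel atlas votes and similarity weights below are invented for illustration; the actual weighting scheme used in these works may differ.

```python
# Sketch: after registration, each atlas proposes a label per voxel, and
# a similarity-weighted vote forms the consensus segmentation.
import numpy as np

atlas_votes = np.array([
    [1, 1, 0, 0],   # atlas 1: per-voxel thyroid (1) / background (0)
    [1, 0, 0, 1],   # atlas 2
    [1, 1, 1, 0],   # atlas 3
])
weights = np.array([0.5, 0.2, 0.3])   # similarity of each atlas to the target

# Weighted probability of the thyroid label at each voxel
p_thyroid = weights @ atlas_votes / weights.sum()
consensus = (p_thyroid >= 0.5).astype(int)
print(consensus.tolist())  # [1, 1, 0, 0]
```

A refinement stage (here, the RF classifier) can then correct voxels where the consensus disagrees with learned appearance features.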
Station labeling of mediastinal lymph nodes is typically performed to identify the location of enlarged nodes for cancer staging. In clinical radiology practice, stations are usually assigned manually by qualitative visual assessment on CT scans, which is time-consuming and highly variable. In this paper, we develop a method that automatically recognizes the lymph node stations in thoracic CT scans based on the anatomical organs in the mediastinum. First, the trachea, lungs, and spine are automatically segmented to locate the mediastinum region. Then, eight more anatomical organs are simultaneously identified by multi-atlas segmentation. Finally, with the segmentation of these anatomical organs, we convert the text definitions of the International Association for the Study of Lung Cancer (IASLC) lymph node map into patient-specific, color-coded CT image maps, so that a station is automatically assigned to each lymph node. We applied this system to CT scans of 86 patients with 336 mediastinal lymph nodes measuring 10 mm or greater; 84.8% of the mediastinal lymph nodes were correctly mapped to their stations.
Enlarged lymph nodes may indicate the presence of illness, so their identification and measurement provide essential biomarkers for diagnosing disease. Accurate automatic detection and measurement of lymph nodes can assist radiologists with better repeatability and quality assurance, but it is also challenging because lymph nodes are often very small and highly variable in shape. In this paper, we propose to tackle this problem via supervised, statistical-learning-based robust voxel labeling, specifically the random forest algorithm. A random forest employs an ensemble of decision trees trained on labeled multi-class data to recognize data features, and is adopted here to handle low-level image features sampled and extracted from 3D medical scans. We exploit three types of image features (intensity, order-1 contrast, and order-2 contrast) and evaluate their effectiveness in a random forest feature-selection setting. The trained forest can then be applied to unseen data by voxel scanning via sliding windows (11×11×11), assigning a class label and class-conditional probability to the unlabeled voxel at the center of each window. Voxels from the manually annotated lymph nodes in a CT volume are treated as the positive class; background non-lymph-node voxels as negatives. We show that the random forest algorithm can be adapted to perform the voxel labeling task accurately and efficiently. The experimental results are very promising, with areas under the receiver operating characteristic (ROC) curve of 0.972 for training and 0.959 for validation. The visualized voxel labeling results also confirm the validity of the approach.
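The sliding-window voxel-labeling scheme above can be sketched as follows. For brevity the patch is 5×5×5 rather than 11×11×11, raw intensities stand in for the three feature types, and the volume and landmark coordinates are synthetic.

```python
# Sketch: cubic patches sampled around labeled voxels are flattened into
# feature vectors and fed to a random forest; at test time the forest
# scores the patch centered on each unseen voxel.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
vol = rng.normal(0, 1, size=(40, 40, 40))
vol[15:25, 15:25, 15:25] += 3.0        # synthetic bright "lymph node"

def patch_features(vol, center, r=2):  # (2r+1)^3 intensity patch
    z, y, x = center
    return vol[z-r:z+r+1, y-r:y+r+1, x-r:x+r+1].ravel()

pos = [(20, 20, 20), (18, 22, 19), (22, 17, 21)]   # inside the node
neg = [(5, 5, 5), (30, 8, 33), (8, 31, 6)]         # background
X = np.array([patch_features(vol, c) for c in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
p = rf.predict_proba(patch_features(vol, (19, 21, 20)).reshape(1, -1))[0, 1]
print(p > 0.5)  # an unseen voxel inside the node scores as positive
```

A real system would sample many thousands of patches per class and use the engineered contrast features rather than raw intensities.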
Automated small bowel segmentation is essential for computer-aided diagnosis (CAD) of small bowel pathology, such as tumor detection, and for pre-operative planning. We previously proposed a method to segment the small bowel using the mesenteric vasculature as a roadmap. The method performed well on small bowel segmentation but produced many false positives, most of which were located on the colon. To improve the accuracy of small bowel segmentation, we propose a semi-automated method with minimal interaction to distinguish the colon from the small bowel. The method utilizes anatomic knowledge of the mesenteric vasculature and a statistical method of colon detection. First, anatomic labeling of the mesenteric arteries is used to identify the arteries supplying the colon. Second, a statistical detector is created by combining two colon probability maps: one map of colon location, derived from colon centerlines extracted from CT colonography (CTC) data, and one map of 3D colon texture, built using Haralick features and support vector machine (SVM) classifiers. The two probability maps are combined to localize colon regions, i.e., voxels with high probabilities on both maps are labeled as colon. Third, colon regions identified by the anatomic labeling and the statistical detector are removed from the original small bowel segmentation. The method was evaluated on 11 abdominal CT scans of patients suspected of having carcinoid tumors, with manually labeled small bowel segmentations as the reference standard. The method reduced the voxel-based false positive rate of small bowel segmentation from 19.7%±3.9% to 5.9%±2.3% (two-tailed P < 0.0001).
Lymph nodes play an important role in clinical practice, but their detection is challenging due to low contrast with surrounding structures and variable size and shape. We propose a fully automatic method for mediastinal lymph node detection on thoracic CT scans. First, the lungs are automatically segmented to locate the mediastinum region. Shape features based on Hessian analysis, local scale, and circular transformation are computed at each voxel. A spatial prior distribution is determined from the identification of multiple anatomical structures (esophagus, aortic arch, heart, etc.) using multi-atlas label fusion. The shape features and spatial prior are then integrated for lymph node detection. The detected candidates are segmented by curve evolution, characteristic features are calculated on the segmented lymph nodes, and a support vector machine is utilized for classification and false positive reduction. We applied our method to 20 patients with 62 enlarged mediastinal lymph nodes. With the spatial prior, the system achieved 80% sensitivity at 8 false positives per patient, a significant improvement over the 45% sensitivity at 8 false positives per patient achieved without it.
Accurate automatic detection and segmentation of abdominal organs from CT images is important for quantitative and qualitative organ tissue analysis as well as computer-aided diagnosis. The large variability of organ locations, the spatial interaction between organs that appear similar in medical scans, and variations in orientation and size are among the major challenges that make the task very difficult. The pancreas poses all of these challenges, in addition to its flexibility, which allows the shape of the tissue to change substantially. Because the pancreas lies in close proximity to numerous surrounding organs within the abdominal cavity, it shifts according to the condition of those organs, so its position and shape are constantly changing; combined with typical patient-to-patient variation and differing scanning conditions, this makes the pancreas hard to localize. In this paper we focus on three abdominal vessels that almost always abut the pancreatic tissue and are therefore useful landmarks for identifying the relative location of the pancreas. The splenic and portal veins extend from the hila of the spleen and liver, respectively, travel through the abdominal cavity, and join at a position close to the head of the pancreas known as the portal confluence. A third vein, the superior mesenteric vein, anastomoses with the other two veins at the portal confluence. We propose an automatic segmentation framework for obtaining the splenic vein, portal confluence, and superior mesenteric vein, evaluated on 17 contrast-enhanced computed tomography datasets. The proposed method uses outputs from multi-organ multi-atlas label fusion and the Frangi vesselness filter to obtain automatic seed points for vessel tracking and to generate statistical models of the desired vessels. The approach is able to identify the vessels and improve localization of the pancreas within the abdomen.
Segmentation of the musculature is very important for accurate organ segmentation, analysis of body composition, and localization of tumors in the muscle. In the research fields of computer-assisted surgery and computer-aided diagnosis (CAD), muscle segmentation in CT images is a necessary pre-processing step. The task is particularly challenging due to the large variability in muscle structure and the overlap in intensity between muscle and internal organs, and it has not been solved completely, especially across the thoracic, abdominal, and pelvic regions. We propose an automated system to segment the musculature on CT scans. The method combines an atlas-based model, an active contour model (ACM), and prior segmentation of fat and bones. First, the body contour, fat, and bones are segmented using existing methods. Second, atlas-based models are pre-defined using anatomic knowledge at multiple key positions in the body to handle the large variability in muscle shape. Third, the atlas model is refined using active contour models constrained by the pre-segmented bone and fat; before ACM refinement, the atlas model initialized for the next slice is updated using the previous slice's result. Finally, the muscle is segmented by thresholding and smoothed in 3D volume space. Thoracic, abdominal, and pelvic CT scans were used to evaluate our method, with five key-position slices per case selected and manually labeled as the reference. Compared with this reference ground truth, the true positive overlap ratio is 91.1%±3.5% and the false positive ratio is 5.5%±4.2%.
Computed tomographic colonography (CTC) is a minimally invasive technique for colonic polyp and cancer screening. The marginal artery of the colon, also known as the marginal artery of Drummond, is the blood vessel that connects the inferior mesenteric artery with the superior mesenteric artery. The marginal artery runs parallel to the colon for its entire length, providing the blood supply to the colon. Detecting the marginal artery may benefit computer-aided detection (CAD) of colonic polyps: it can be used to identify the teniae coli based on their anatomic spatial relationship, and it can also serve as an alternative marker for colon localization in cases of colon collapse, when the endoluminal centerline cannot be computed directly. This paper proposes an automatic method for marginal artery detection on CTC; to the best of our knowledge, this is the first work presented for this purpose. Our method includes two stages. The first stage extracts the blood vessels in the abdominal region, using the eigenvalues of the Hessian matrix to detect line-like structures in the images. The second stage reduces the false positives from the first stage using two different masks to exclude false positive vessel regions: a dilated colon mask obtained by colon segmentation, and an eroded visceral fat mask obtained by fat segmentation in the abdominal region. We tested our method on a CTC dataset with 6 cases. Using the ratio of overlap with manual labeling of the marginal artery as the standard of reference, our method yielded true positive, false positive, and false negative fractions of 89%, 33%, and 11%, respectively.
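The Hessian-eigenvalue test for line-like structures used in the first stage can be sketched as follows. The Gaussian smoothing scale and the toy tube phantom are illustrative assumptions; for a bright tube, two eigenvalues are strongly negative and one is near zero.

```python
# Sketch: per-voxel Hessian eigenvalues of a smoothed volume reveal
# line-like (vessel) structures by their characteristic spectrum.
import numpy as np
from scipy import ndimage

def hessian_eigenvalues(vol, sigma=1.5):
    """Per-voxel Hessian eigenvalues, sorted by absolute value (ascending)."""
    grads = np.gradient(ndimage.gaussian_filter(vol, sigma))
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        gi = np.gradient(grads[i])          # second derivatives
        for j in range(3):
            H[..., i, j] = gi[j]
    eig = np.linalg.eigvalsh(H)
    order = np.argsort(np.abs(eig), axis=-1)
    return np.take_along_axis(eig, order, axis=-1)

vol = np.zeros((30, 30, 30))
vol[:, 14:16, 14:16] = 1.0               # bright tube along the z-axis
lam = hessian_eigenvalues(vol)[15, 14, 14]
# Line-like signature: smallest-|.| eigenvalue near zero (along the tube),
# largest-|.| eigenvalue negative (bright cross-section).
print(abs(lam[0]) < abs(lam[2]), lam[2] < 0)
```

Frangi-style vesselness measures are built from ratios of exactly these sorted eigenvalues.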
Patients with chronic lymphocytic leukemia (CLL) have an increased frequency of axillary lymphadenopathy. Pre-treatment CT scans can be used to upstage patients at the time of presentation, and post-treatment CT scans can reduce the number of complete responses. In the current clinical workflow, the detection and diagnosis of lymph nodes is usually performed manually by examining all slices of the CT images, which can be time-consuming and highly dependent on the observer's experience; a system for automatic lymph node detection and measurement is therefore desired. We propose a computer-aided detection (CAD) system for axillary lymph nodes on CT scans in CLL patients. The lungs are first automatically segmented, and the patient's body in the lung region is extracted to set the search region for lymph nodes. Multi-scale Hessian-based blob detection is then applied to detect potential lymph nodes within the search region. Next, the detected candidates are segmented by a fast level-set method. Finally, features are calculated from the segmented candidates and support vector machine (SVM) classification is utilized for false positive reduction. Two blobness features, Frangi's and Li's, are tested, and their free-response receiver operating characteristic (FROC) curves are generated to assess system performance. We applied our detection system to 12 patients with 168 axillary lymph nodes measuring greater than 10 mm, all manually labeled as ground truth. The system achieved sensitivities of 81% and 85% at 2 false positives per patient for Frangi's and Li's blobness, respectively.
Segmentation of the mesenteric vasculature has important applications for evaluation of the small bowel. In particular, it may be useful for small bowel path reconstruction and for precise localization of small bowel tumors such as carcinoid. Segmentation of the mesenteric vasculature is very challenging, even for manual labeling, because of the low contrast and tortuosity of the small blood vessels. Many vessel segmentation methods have been proposed, but most of them are designed for segmenting large vessels. We propose a semi-automated method to extract the mesenteric vasculature on contrast-enhanced abdominal CT scans. First, the internal abdominal region of the body is automatically identified. Second, the major vascular branches are segmented using a multi-linear vessel tracing method. Third, small mesenteric vessels are segmented using multi-view, multi-scale vesselness enhancement filters. The method is insensitive to image contrast, variations in vessel shape, and small occlusions due to overlap, and it can automatically detect mesenteric vessels with diameters as small as 1 mm. Compared with the standard of reference manually labeled by an expert radiologist, the segmentation accuracy (recall) for the whole mesenteric vasculature was 82.3%, with a 3.6% false positive rate.
Many malignant processes cause abdominal lymphadenopathy, and computed tomography (CT) has become the primary modality for its detection. A lymph node is considered enlarged (swollen) if it is more than 1 centimeter in diameter. Which lymph nodes are swollen depends on the type of disease and the body parts involved, so identifying their locations is very important for determining the possible cause. In the current clinical workflow, the detection and diagnosis of enlarged lymph nodes is usually performed manually by examining all slices of the CT images, which can be error-prone and time-consuming. A 3D blob enhancement filter is a common approach to computer-aided node detection. We propose an improved blob detection method for automatic lymph node detection in contrast-enhanced abdominal CT images. First, the spine is automatically extracted to indicate the abdominal region. Since lymph nodes usually lie next to blood vessels, the abdominal blood vessels are then segmented as a reference to set the search region for lymph nodes. Next, lymph node candidates are generated by object-scale Hessian analysis. Finally, the detected candidates are segmented and prior anatomical knowledge is utilized for false positive reduction. We applied our method to 9 patients with 11 enlarged lymph nodes and compared the results with the performance of the original multi-scale Hessian analysis. The sensitivities were 91% and 82% for our method and the multi-scale Hessian analysis, respectively, and the false positive rates per patient were 17 and 28, respectively. Our results indicate that computer-aided lymph node detection with this blob detector may yield a high sensitivity and a relatively low FP rate in abdominal CT.
Many malignant processes cause abdominal lymphadenopathy, and computed tomography (CT) has become the primary modality for its detection. A lymph node is considered enlarged (swollen) if it is more than 1 centimeter in diameter. Which lymph nodes are swollen depends on the type of disease and the body parts involved, so identifying their locations is very important for determining the possible cause. In the current clinical workflow, the detection and diagnosis of enlarged lymph nodes is usually performed manually by examining all slices of the CT images, which can be error-prone and time-consuming. A 3D blob enhancement filter is a common approach to computer-aided node detection. We propose a new 3D blob detector for automatic lymph node detection in contrast-enhanced abdominal CT images. Since lymph nodes usually lie next to blood vessels, the abdominal blood vessels are first segmented as a reference to set the search region for lymph nodes. A new detection response measure, blobness, is then defined based on the eigenvalues of the Hessian matrix and the object scale, and voxels with higher blobness are clustered as lymph node candidates. Finally, prior anatomical knowledge is utilized for false positive reduction. We applied our method to 5 patients and compared the results with the performance of the original blobness definition. Both methods achieved a sensitivity of 83.3%, but the false positive rates per patient were 14 and 26 for our method and the original method, respectively. Our results indicate that computer-aided lymph node detection with this new blob detector may yield a high sensitivity and a relatively low FP rate in abdominal CT.
CT colonography (CTC) is a feasible and minimally invasive method for the detection of colorectal polyps and for cancer screening. Computer-aided detection (CAD) of polyps has improved the consistency and sensitivity of virtual colonoscopy interpretation and reduced the interpretation burden. A CAD system typically consists of four stages: (1) image pre-processing including colon segmentation; (2) initial detection generation; (3) feature selection; and (4) detection classification. In our experience, three existing problems limit the performance of our current CAD system. First, high-density orally administered contrast agents in fecal-tagging CTC have scatter effects on neighboring tissues; the scattering manifests itself as an artificial elevation in the observed CT attenuation values of those tissues. This pseudo-enhancement phenomenon presents a problem for computer-aided polyp detection, especially when polyps are submerged in the contrast agent. Second, the general kernel approach to surface curvature computation in the second stage of our CAD system can yield erroneous results for thin structures such as small (6-9 mm) polyps and for touching structures such as polyps that lie on haustral folds; these erroneous curvatures reduce the sensitivity of polyp detection. Third, more than 150 features are selected from each polyp candidate in the third stage of our CAD system, and these high-dimensional features make it difficult to learn a good decision boundary for detection classification, reducing the accuracy of predictions. We therefore propose an improved CAD system for polyp detection in CTC data that introduces three new techniques. First, a scale-based scatter correction algorithm is applied to reduce pseudo-enhancement effects in the image pre-processing stage. Second, a cubic spline interpolation method is utilized to accurately estimate curvatures for initial detection generation. Third, a new dimensionality-reduction classifier, combining a diffusion map and local linear embedding (DMLLE), is developed for classification and false positive (FP) reduction. The performance of the improved CAD system was evaluated and compared with our existing CAD system (without those techniques) using CT scans of 1,186 patients, divided into a training set and a test set. At a rate of 5 FPs per patient, the sensitivity of the improved CAD system increased by 18% on the training data and by 15% on the test data. Our results indicate that the improved CAD system achieves significantly better performance on medium-sized colonic adenomas, with higher sensitivity and a lower FP rate in CTC.
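The spline-based curvature idea in the second technique can be sketched in 1D: fit a cubic spline to sampled points and evaluate curvature analytically from its derivatives, rather than from a discrete kernel. The parametric circle test case is an illustrative stand-in for a polyp surface profile.

```python
# Sketch: cubic spline interpolation gives smooth first and second
# derivatives, from which curvature follows analytically.
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0, 2 * np.pi, 100)
x, y = np.cos(t), np.sin(t)          # unit circle: true curvature = 1
sx, sy = CubicSpline(t, x), CubicSpline(t, y)

def curvature(t0):
    x1, y1 = sx(t0, 1), sy(t0, 1)    # first derivatives of the spline
    x2, y2 = sx(t0, 2), sy(t0, 2)    # second derivatives of the spline
    return abs(x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5

print(round(float(curvature(np.pi / 2)), 2))  # ≈ 1.0 for the unit circle
```

On a surface, the same idea is applied along parameter directions to obtain principal curvatures; the smoothness of the spline avoids the kernel artifacts on thin and touching structures noted above.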
KEYWORDS: 3D modeling, Image segmentation, 3D image processing, Medical imaging, Magnetic resonance imaging, Breast, Statistical analysis, Computer programming, Data modeling, Binary data
Active Shape Models (ASM) have been applied to various segmentation tasks in medical imaging, most successfully in 2D segmentation of objects that have a fairly consistent shape. However, several difficulties arise when extending 2D ASM to 3D: (1) the difficulty of 3D labeling, (2) the requirement of a large number of training samples, (3) the challenging problem of landmark correspondence in 3D, and (4) inefficient initialization and optimization in 3D. This paper addresses the 3D segmentation problem by using a small number of effective 2D statistical models called oriented ASM (OASM). We demonstrate that a small number of 2D OASM models, each derived from a contiguous chunk of slices, are sufficient to capture the shape variation between slices and individual objects. Each model can be matched rapidly to a new slice by using the OASM algorithm [1]. Our experiments in segmenting the breast and the bones of the foot in MR images indicate the following: (1) The accuracy of segmentation via our method is much better than that of 2D-ASM-based segmentation methods [2]. (2) Far fewer landmarks are required compared with the thousands of landmarks needed in true 3D ASM; therefore, far fewer training samples are required to capture details. (3) Our method is computationally slightly more expensive than the 2D method [2], owing to its two-level dynamic programming (2LDP) algorithm.
KEYWORDS: Magnetic resonance imaging, Tissues, Image segmentation, Image processing, Medical imaging, Bismuth, Image analysis, Data modeling, Scanners, Scene simulation
An automatic, simple, image-intensity-standardization-based strategy for correcting background inhomogeneity in MR images is presented in this paper. Image intensities are first transformed to a standard intensity gray scale by a standardization process. Different tissue sample regions are then obtained from the standardized image simply by thresholding based on fixed intensity intervals. For each tissue region, a polynomial is fitted to the estimated discrete background intensity variation. Finally, a combined polynomial is determined and used to correct the intensity inhomogeneity in the whole image. The above procedure is repeated on the corrected image iteratively until the size of the extracted tissue regions does not change significantly between two successive iterations. Intensity scale standardization is applied to ensure that the corrected image is not biased by the fitting strategy. The method has been tested on a number of simulated and clinical MR images. These tests, and a comparison with the non-parametric non-uniform intensity normalization (N3) method, indicate that the method is effective for background intensity inhomogeneity correction and may have a slight edge over N3.
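The polynomial-fitting step above can be illustrated with a minimal 1D sketch: sample the intensity variation within one tissue region, fit a low-order polynomial to it, and divide the image by the normalized fitted field. The polynomial degree and the synthetic bias field are illustrative assumptions.

```python
# Sketch: a smooth multiplicative bias field over a homogeneous tissue
# region is recovered by polynomial fitting and divided out.
import numpy as np

x = np.linspace(0, 1, 200)
true_tissue = np.full_like(x, 100.0)             # homogeneous tissue
bias = 1.0 + 0.3 * x - 0.2 * x**2                # smooth inhomogeneity
observed = true_tissue * bias

coeffs = np.polyfit(x, observed, deg=2)          # fit the slow variation
fitted = np.polyval(coeffs, x)
corrected = observed / (fitted / fitted.mean())  # normalized division

print(round(float(corrected.std()), 3))  # 0.0: inhomogeneity removed
```

In the full method this is done per tissue region in 2D/3D, the per-region fits are combined into one polynomial, and the whole procedure iterates until the extracted tissue regions stabilize.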
Active Shape Models (ASM) are widely employed for recognizing anatomic structures and for delineating them in medical images. In this paper, we present a novel strategy called Oriented Active Shape Models (OASM) in an attempt to overcome three major limitations of ASM: (1) poor delineation accuracy, (2) the requirement of a large number of landmarks, and (3) the sensitivity of object boundary recognition to the search range. OASM effectively combines the rich statistical shape information embodied in ASM with the boundary-orientedness property and the globally optimal delineation capability of the live wire methodology of boundary segmentation. The latter allows live wire to effectively separate an object boundary from other non-object boundaries with similar properties that come very close in the image domain. Our approach leads to a 2-level dynamic programming method, wherein the first level corresponds to boundary recognition and the second level to boundary delineation. Our experiments in segmenting the breast, liver, bones of the foot, and cervical vertebrae of the spine in MR and CT images indicate the following: (1) the accuracy of segmentation via OASM is considerably better than that of ASM; (2) the number of landmarks can be reduced by a factor of 3 in OASM over that in ASM; (3) OASM becomes largely independent of the search range. All three benefits of OASM ensue mainly from the severe constraints brought in by the boundary-orientedness property of live wire and the globally optimal solution of dynamic programming.
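For a fixed per-pixel boundary cost, the globally optimal delineation that live wire contributes to OASM reduces to a shortest-path search. The sketch below is hypothetical code, not the authors' 2-level dynamic programming: it finds a minimal-cost 4-connected path with Dijkstra's algorithm, and the simple `cost` image standing in for live wire's oriented boundary cost is an assumption.

```python
import heapq

def live_wire_path(cost, start, end):
    """Minimal-cost path between two pixels on a 4-connected grid.

    cost[r][c] is a per-pixel boundary cost (low on true boundaries);
    live wire's globally optimal delineation corresponds to a shortest
    path in this graph between user-specified points.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # reconstruct the path by walking predecessors back to the start
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```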
KEYWORDS: Image segmentation, Bone, Magnetic resonance imaging, 3D image processing, 3D modeling, Spine, 3D displays, Computed tomography, Binary data, Medical imaging
Several medical application areas require the segmentation and separation of the component bones of a joint in a sequence of images acquired under various loading conditions, our own target area being joint motion analysis. This is a challenging problem due to the proximity of bones at the joint, partial volume effects, and other modality-specific factors that confound boundary contrast. A model-based strategy is proposed in this paper wherein a rigid model of the bone is generated from a segmentation of the bone, obtained by the live wire method, in the image corresponding to one position of the joint. In the other images of the joint, this model is used to search for the same bone by minimizing an energy functional that utilizes both boundary- and region-based information. An evaluation of the method on a total of 60 MR and CT data sets of the ankle complex and cervical spine indicates that the segmentations agree very closely with the live wire segmentations, yielding true positive and false positive volume fractions in the ranges 89-97% and 0.2-0.7%, respectively. The method requires 1-2 minutes of operator time and 6-7 minutes of computer time, which makes it significantly more efficient than live wire -- the only method currently available for the task.
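As a rough illustration of the model-fitting step, the sketch below searches over translations only (standing in for the full rigid-body transform) and evaluates a region-based energy; the boundary term of the energy functional and a proper optimizer are omitted, and all names are hypothetical.

```python
import numpy as np

def fit_rigid_translation(img, model_pts, bone_mean, search=5):
    """Brute-force translation search minimizing a region-based energy.

    model_pts: (N, 2) integer (row, col) points of the rigid bone model.
    bone_mean: expected bone intensity; the energy is the mean squared
    deviation of image intensities under the transformed model from it.
    Illustration only: translations replace the full rigid-body
    transform, and the boundary term of the energy is left out.
    """
    best, best_e = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            pts = model_pts + np.array([dy, dx])
            # skip transforms that move the model outside the image
            if (pts < 0).any() or (pts[:, 0] >= img.shape[0]).any() \
                    or (pts[:, 1] >= img.shape[1]).any():
                continue
            e = np.mean((img[pts[:, 0], pts[:, 1]] - bone_mean) ** 2)
            if e < best_e:
                best_e, best = e, (dy, dx)
    return best
```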
This project is the result of a marriage between two activities that have existed independently for quite some time within two collaborative groups: (1) the development of a mechanical linkage device and its use to externally test the flexibility characteristics of the ankle complex under load; (2) the development of an MR imaging and image analysis methodology to characterize the internal 3D movements of the bones of the ankle complex. In the resulting methodology, which we term stress MRI (sMRI for short), the ankle is MR imaged in various foot configurations while held in place by the linkage device under a controlled load proven to detect hindfoot instability. The acquired images are then subjected to a series of image processing and analysis steps to yield a set of parameters describing the morphology, architecture, and kinematics of the bones of the ankle complex. These parameters are computed from images acquired for 14 normal ankles (the left and right ankles of 7 subjects) and for 8 cadaveric ankles, the latter in five different situations: the intact ankle, serial sectioning of two ligaments (the CFL and the ATFL), and subsequent surgical reconstruction of the two ligaments by two procedures. The results indicate that (i) there is considerable left-to-right symmetry in the ankles; (ii) ligament damage causes a few parameters to change significantly; (iii) both ankle and subtalar motions occur in inversion and in anterior drawer; (iv) in vitro motion is generally greater than in vivo motion; and (v) the surgical procedures are effective in achieving normalcy, yet there are differences in their performance.
A simple, non-iterative, membership-based method for multiprotocol brain magnetic resonance image segmentation has been developed. Intensity inhomogeneity correction and MR intensity standardization techniques are applied first to give the MR image intensities a tissue-specific meaning. The mean intensity vector and covariance matrix of each brain tissue are then estimated and fixed. Vectorial scale-based fuzzy connectedness and certain morphological operations are utilized to generate the brain intracranial mask. The fuzzy membership value of each voxel for each brain tissue is then estimated within the intracranial mask via a multivariate Gaussian model. Finally, a maximum likelihood criterion that takes spatial constraints into account is utilized to classify all voxels in the intracranial mask into gray matter, white matter, and cerebrospinal fluid. This method has been tested on 10 clinical MR data sets. These tests and a comparison with the C-means and fuzzy C-means clustering methods indicated the effectiveness of the method.
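The per-voxel maximum likelihood classification under fixed Gaussian tissue models can be sketched as follows; this is a minimal illustration with the spatial constraints omitted, and the function and variable names are hypothetical.

```python
import numpy as np

def classify_voxels(features, means, covs):
    """Maximum-likelihood tissue labels under per-class Gaussians.

    features: (N, D) intensity vectors, one per voxel (D MR protocols);
    means/covs: fixed per-tissue mean vectors and covariance matrices.
    Returns the index of the most likely tissue class for each voxel.
    """
    log_liks = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = features - mu
        # squared Mahalanobis distance per voxel: diff^T * inv * diff
        mahal = np.einsum("nd,de,ne->n", diff, inv, diff)
        log_liks.append(-0.5 * (mahal + logdet))
    return np.argmax(np.stack(log_liks, axis=1), axis=1)
```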
KEYWORDS: Image segmentation, Bone, 3D modeling, Magnetic resonance imaging, 3D image processing, Kinematics, Binary data, Medical imaging, Tissues, Image processing
Our ongoing project on the kinematic analysis of the joints of the foot and ankle via magnetic resonance (MR) imaging requires segmentation of bones in images acquired at different positions of the joint. This segmentation demands an extensive amount of operator time, especially for the current study, which involves 300 scenes with 4 bones to be segmented in each. A 3D model-based segmentation approach is developed wherein the model is generated from the segmentation of a specific bone in one scene and is used for segmenting the same bone in all other scenes. The method works in two sequential steps. In the first step, a patient- and bone-specific model is generated by segmenting the target bone from one scene using the live wire method. In the second step, the segmentation of the same bone for the same patient in a scene corresponding to another ankle position is obtained by finding an optimum rigid-body transformation that minimizes a fitting energy. The fitting energy captures both boundary- and region-based information in a unified manner. The method has produced satisfactory results for the 30 pairs of images used for evaluation and will significantly reduce the operator time required by our ongoing study.
An automatic, acquisition-protocol-independent, entirely image-based strategy for correcting background intensity variation in medical images has been developed. Local scale - a fundamental image property that is derivable entirely from the image and that requires no prior knowledge about the imaging protocol or object material property distributions - is used to obtain a set of homogeneous regions, no matter what each region is, and to fit a 2nd-degree polynomial to the intensity variation within them. This polynomial is used to correct the intensity variation. The procedure is repeated on the corrected image until the size of the segmented homogeneous regions does not change significantly from that in the previous iteration. Intensity scale standardization ensures that the corrected images are not biased by the fitting strategy. The method has been tested on 1000 3D mathematical phantoms, which include 5 levels each of blurring and noise and 4 types of background variation - additive and multiplicative Gaussian and ramp. It has also been tested on 10 clinical MRI data sets of the brain. These tests, and a comparison with the method of homomorphic filtering, indicate the effectiveness of the method.