Purpose: Multisource images are of great interest in medical imaging because they provide complementary information from different sources, such as the T1 and T2 modalities in MRI. However, multisource data can also be subject to redundancy and correlation. The question is how to fuse the multisource information efficiently without reinforcing the redundancy. We propose a method for segmenting multisource images that are statistically correlated.
Approach: The proposed method continues prior work in which we introduced the copula model into hidden Markov fields (HMF). To achieve multisource segmentation, we use a functional measure of dependency called the "copula," which we incorporate into conditional random fields (CRF). Contrary to HMF, where prior knowledge of the hidden states is modeled by a Markov field, CRF assume no prior information: only the distribution of the hidden states conditional on the observations is known. This conditional distribution depends on the data and can be modeled by an energy function composed of two terms. The first groups voxels with similar intensities into the same class; the second encourages a pair of voxels to share a class when the difference between their intensities is small.
Results: HMF and CRF are compared both theoretically and experimentally, using simulated data and real data from BRATS 2013. Moreover, our method is compared with several state-of-the-art methods, both supervised (convolutional neural networks) and unsupervised (hierarchical MRF). Our unsupervised method gives results similar to those of decision trees on synthetic images and of convolutional neural networks on real images, although both of those methods are supervised.
Conclusions: We compare two statistical methods that use the copula, HMF and CRF, to deal with multicorrelated images and demonstrate the benefit of the copula. In both models, the copula considerably improves the results compared with individual segmentations.
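The two-term energy function described in the Approach can be sketched as a contrast-sensitive Potts model. This is a minimal illustrative sketch, not the paper's exact CRF: the per-class mean intensities, the weights `beta` and `sigma`, and the 4-connected neighborhood are all assumptions made for the example.

```python
import numpy as np

def crf_energy(labels, image, class_means, beta=1.0, sigma=10.0):
    """Sketch of a two-term CRF energy (illustrative, not the paper's model).

    Unary term: groups voxels with similar intensities into the same class
    by penalizing the distance to a hypothetical per-class mean intensity.
    Pairwise term: encourages neighboring voxels to share a label when the
    difference between their intensities is small (contrast-sensitive Potts).
    """
    # Unary: squared distance of each pixel's intensity to its class mean.
    unary = np.sum((image - class_means[labels]) ** 2)

    # Pairwise over 4-connected horizontal and vertical neighbors.
    pairwise = 0.0
    for dy, dx in ((0, 1), (1, 0)):
        a = labels[: labels.shape[0] - dy, : labels.shape[1] - dx]
        b = labels[dy:, dx:]
        ia = image[: image.shape[0] - dy, : image.shape[1] - dx]
        ib = image[dy:, dx:]
        # Label disagreement is penalized more when intensities are close.
        w = np.exp(-((ia - ib) ** 2) / (2 * sigma ** 2))
        pairwise += np.sum(beta * w * (a != b))

    return unary + pairwise
```

Minimizing this energy over the label field then yields the segmentation; the sketch only evaluates the objective for a given labeling.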
The COVID-19 pandemic continues to spread rapidly around the world and has caused devastating outcomes for the health of the global population. The reverse transcription-polymerase chain reaction (RT-PCR) test, currently the only gold standard for screening infected cases, yields a relatively high false positive rate and low sensitivity for asymptomatic subjects. The use of chest X-ray radiography (CXR) images coupled with deep-learning (DL) methods for image classification represents an attractive adjunct to, or replacement for, RT-PCR testing. However, its usage has been widely debated over the past few months and its potential effectiveness remains unclear. A number of DL-based methods have been proposed to classify COVID-19 cases from normal ones, achieving high performance. However, these methods show limited performance on the multi-class classification task for COVID-19, pneumonia, and normal cases, mainly due to two factors: 1) the textures in COVID-19 CXR images are extremely similar to those of pneumonia cases, and 2) there are far fewer COVID-19 cases than cases of the other two classes in the public domain. To address these challenges, a novel framework is proposed to learn a deep convolutional neural network (DCNN) model for accurately classifying COVID-19 and pneumonia cases from normal cases using CXR images. In addition to training the model with a conventional classification loss that measures classification accuracy, the proposed method employs a reconstruction loss measuring image fidelity and an adversarial loss measuring class-distribution fidelity to help the main DCNN model extract more informative features for multi-class classification. Experimental results on a COVID-19 dataset demonstrate the superior classification accuracy of the proposed method compared with other existing DL-based methods.
An experiment on another cancer dataset further suggests the potential of applying the proposed method to other medical imaging applications.
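The composite training objective described above (classification loss plus reconstruction and adversarial losses) can be sketched as a weighted sum. This is a schematic illustration, not the paper's implementation: the loss weights `w_rec` and `w_adv` are hypothetical, and `adv_term` stands in for whatever the discriminator would supply.

```python
import numpy as np

def cross_entropy(probs, labels):
    """Multi-class classification loss (negative log-likelihood)."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def total_loss(probs, labels, x, x_rec, adv_term, w_rec=0.1, w_adv=0.01):
    """Sketch of the composite objective: classification accuracy
    (cross-entropy) + image fidelity (reconstruction MSE) + class-
    distribution fidelity (an adversarial term from a discriminator).
    The weights w_rec and w_adv are illustrative assumptions."""
    rec = np.mean((x - x_rec) ** 2)  # reconstruction loss
    return cross_entropy(probs, labels) + w_rec * rec + w_adv * adv_term
```

In training, the gradient of this combined scalar would flow back through the shared feature extractor, which is the mechanism the abstract credits for the more informative features.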
Segmentation of organs at risk (OAR) in computed tomography (CT) is of vital importance in radiotherapy treatment. The task is time consuming and, for some organs, very challenging due to the low intensity contrast in CT. We propose a framework for the automatic segmentation of multiple OAR: esophagus, heart, trachea, and aorta. Unlike previous deep-learning approaches, we make use of global localization information based on an original distance map that yields not only the localization of each organ but also the spatial relationships between them. Instead of segmenting the organs directly, we first generate the localization map by minimizing a reconstruction error within an adversarial framework. This map, which encodes the localization of all organs, is then used to guide the segmentation task in a fully convolutional setting. Experimental results show encouraging performance on CT scans of 60 patients, totaling 11,084 slices, in comparison with other state-of-the-art methods.
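The kind of localization map this abstract relies on can be illustrated with a per-organ Euclidean distance transform: each voxel stores its distance to the nearest voxel of each organ, so the stacked maps encode both where each organ is and how the organs sit relative to one another. This is a toy sketch with made-up masks, not the paper's actual map construction.

```python
import numpy as np
from scipy import ndimage as ndi

# Toy binary masks for two "organs" on a small grid (hypothetical data).
grid = np.zeros((32, 32), dtype=bool)
organ_a = grid.copy(); organ_a[8:12, 8:12] = True
organ_b = grid.copy(); organ_b[20:24, 20:24] = True

# Per-organ Euclidean distance maps: each voxel holds its distance to the
# nearest voxel of that organ (0 inside the organ itself).
dist_a = ndi.distance_transform_edt(~organ_a)
dist_b = ndi.distance_transform_edt(~organ_b)

# Stacking the maps gives a multi-channel localization image that also
# encodes the spatial relationship between the organs.
dist_map = np.stack([dist_a, dist_b], axis=0)
```

A segmentation network could take such channels as auxiliary input or as a regression target, which is the guidance role the abstract describes.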
Magnetic resonance imaging (MRI) is widely used in radiology diagnosis, especially for pathology detection in the human brain. Most methods currently applied to automatically segment brain tumors rely exclusively on T1-weighted sequences, despite the fact that the imaging modality is multispectral. This work focuses on the integration, or fusion, of the information provided by each sequence, i.e., T1, T2, and PD. Based on the aggregators proposed in fuzzy theory, a system integrating all of this information is established. The paper discusses several well-known operators, their properties, and their application in tumor segmentation. In particular, the Davies-Bouldin index is used to determine the parameters of the parametric operations. The results show the importance of data fusion in the segmentation process and reveal that T-norms are less robust to noise than mean operators. Meanwhile, the allocated weights illustrate the order of importance of each spectrum in pathology detection and agree with their characteristics.
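The robustness finding above can be illustrated in a few lines: a product T-norm lets one noisy source veto the others, while a mean operator absorbs the outlier. The sequence names and membership values below are invented for the example.

```python
import numpy as np

# Hypothetical per-sequence membership scores for one voxel; the PD score
# is corrupted by noise.
scores = {"T1": 0.9, "T2": 0.85, "PD": 0.1}
values = np.array(list(scores.values()))

# Product T-norm (conjunctive): a single noisy source drags the result down.
t_norm = np.prod(values)

# Mean operator: the outlier is averaged out, so the fused score stays high.
mean_op = np.mean(values)
```

This is the behavior behind the abstract's conclusion that T-norms are less robust to noise than mean operators.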
An information-fusion-based fuzzy segmentation method applied to magnetic resonance images (MRI) is proposed in this paper. It automatically extracts the normal and abnormal tissues of the human brain from multispectral images such as T1-weighted, T2-weighted, and proton density (PD) feature images. Fuzzy models of normal tissues corresponding to the three MRI sequences are derived from histograms according to a priori knowledge. Three different functions are chosen to calculate the fuzzy models of abnormal tissues. The fuzzy features extracted by these models are then joined by a fuzzy relation operator that represents their fuzzy feature fusion. The final segmentation result is obtained by fuzzy region growing based on a fuzzy decision rule. The experimental results of the proposed method are compared with segmentations manually labeled by a neuroradiologist for abnormal tissues and with the anatomic model of BrainWeb for normal tissues. The MRI images used in our experiments were acquired on a 1.5 T GE scanner for abnormal brains and taken from a 3D simulated brain database for normal brains, using an axial 3D IR T1-weighted (TI/TR/TE: 600/10/2), an axial FSE T2-weighted (TR/TE: 3500/102), and an axial FSE PD-weighted (TR/TE: 3500/11) sequence. Over the four patients studied, the average probability of false detection of abnormal tissues is 5%. For normal tissues, a false detection rate of 4%-15% is obtained in images with 3%-7% noise. All of these results demonstrate the good performance of our method.
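The building blocks of this kind of pipeline can be sketched generically: a histogram-derived membership function per sequence, and a fuzzy relation operator joining them. The trapezoidal shape, the minimum as the fusion operator, and all parameter values below are illustrative assumptions, not the paper's specific models.

```python
import numpy as np

def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal membership function, a common fuzzy tissue model whose
    breakpoints (a, b, c, d) would be read off a histogram; the values
    used here are hypothetical."""
    x = np.asarray(x, dtype=float)
    rising = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    falling = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rising, falling)

def fuse_min(*memberships):
    """Conjunctive fusion with the minimum T-norm: a voxel gets a high
    fused score only if all sequences (e.g., T1, T2, PD) agree."""
    return np.minimum.reduce(memberships)
```

Region growing would then threshold or compare these fused scores under a fuzzy decision rule; that step is omitted here.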
Magnetic resonance image analysis by computer is useful for aiding the diagnosis of disease. We present in this paper an automatic segmentation method for the principal brain tissues. It is based on the possibilistic clustering approach, an improvement of the fuzzy c-means clustering method. To improve the efficiency of the clustering process, the initialization problem is discussed and solved by combining the approach with a histogram analysis method. Our method can automatically determine the number of classes to cluster and the initial values for each class. It has been tested on a set of forty MR brain images with and without the presence of tumors. The experimental results show that it is simple, rapid, and robust for segmenting the principal brain tissues.
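The histogram-based initialization idea can be sketched as peak-picking on the intensity histogram: each local maximum suggests one tissue class and its initial center. This is a simple stand-in for the paper's procedure; the bin count and peak rule are assumptions.

```python
import numpy as np

def histogram_init(image, bins=64):
    """Pick initial cluster centers from intensity-histogram peaks.
    A minimal sketch of histogram-guided initialization, not the
    paper's exact method: each local maximum of the histogram yields
    one class, so the number of classes comes out automatically."""
    hist, edges = np.histogram(image, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])  # bin midpoints
    # Interior local maxima of the histogram become initial class centers.
    peaks = [i for i in range(1, bins - 1)
             if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]]
    return centers[peaks]
```

The returned centers (and their count) would then seed the possibilistic/fuzzy c-means iterations.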
Level set methods offer a powerful approach to medical image segmentation since they can handle cavities, concavities, convolutions, splitting, and merging. However, these methods require specifying initial curves and provide good results only if these curves are placed near-symmetrically with respect to the object boundary. Another well-known segmentation technique, the morphological watershed transform, can segment unique boundaries from an image, but it is very sensitive to small variations of the image magnitude; consequently, the number of generated regions is undesirably large and the segmented boundaries are not smooth enough. In this paper, a hybrid 3D medical image segmentation algorithm combining the watershed transform and level set techniques is proposed. This hybrid algorithm resolves the weaknesses of each method. An initial partitioning of the image into primitive regions is produced by applying the watershed transform to the image gradient magnitude; this segmentation result is then treated as the initial localization of the desired contour and used in the subsequent level set method, which provides closed, smooth, and accurately localized contours or surfaces. Experimental results are also presented and discussed.
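The watershed-then-level-set pipeline can be sketched with off-the-shelf scikit-image components. This is a 2D toy with a synthetic image and a morphological Chan-Vese level set standing in for the paper's 3D algorithm; the marker strategy, iteration count, and disk data are all assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed, morphological_chan_vese

# Synthetic image: two bright disks on a dark background, a toy stand-in
# for a medical image slice.
yy, xx = np.mgrid[0:80, 0:80]
image = ((yy - 25) ** 2 + (xx - 25) ** 2 < 100).astype(float)
image += ((yy - 55) ** 2 + (xx - 55) ** 2 < 100).astype(float)

# Step 1: watershed on the gradient magnitude gives an initial partition.
gradient = sobel(image)
markers = np.zeros_like(image, dtype=int)
markers[image < 0.5] = 1                                    # background seed
markers[ndi.binary_erosion(image > 0.5, iterations=3)] = 2  # object seeds
initial = watershed(gradient, markers) == 2

# Step 2: the watershed result initializes a (morphological) level set,
# which evolves it into a closed, smooth contour.
refined = morphological_chan_vese(image, 20, init_level_set=initial)
```

The key design point is the same as in the abstract: the watershed supplies the initialization the level set needs, and the level set smooths the over-fragmented watershed boundaries.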
This paper presents a fuzzy information fusion method to automatically extract tumor areas of the human brain from multispectral magnetic resonance (MR) images. The multispectral images consist of T1-weighted (T1), proton density (PD), and T2-weighted (T2) feature images, in which the signal intensities of a tumor differ. Some tissue is more visible in one image type than in the others, so fusion of the information is necessary. Our method models the fuzzy information about the tumor by membership functions. This modeling is based on the a priori knowledge of radiology experts and on the MR signals of the brain tissues. Three membership functions related to the three image types are proposed according to their characteristics. The brain extraction is then carried out by fusing all three sources of fuzzy information. The experimental results (based on 5 patients studied) show a mean false-negative rate of 2% and a mean false-positive rate of 1.3%, compared with the results obtained by a radiologist using manual tracing.