KEYWORDS: Digital breast tomosynthesis, Education and training, Denoising, Data modeling, Breast, Visualization, Tunable filters, Image filtering, Medical imaging, Image quality
Purpose: Image denoising based on deep neural networks (DNNs) requires a large training dataset of digital breast tomosynthesis (DBT) projections acquired at different radiation doses, which is impracticable to collect. Therefore, we propose an extensive investigation of the use of software-generated synthetic data for training DNNs to denoise real DBT data. Approach: The approach consists of generating, by software, a synthetic dataset representative of the DBT sample space, containing noisy and original images. Synthetic data were generated in two different ways: (a) virtual DBT projections generated by OpenVCT and (b) noisy images synthesized from photographs using noise models applicable to DBT (e.g., Poisson–Gaussian noise). DNN-based denoising techniques were then trained on the synthetic dataset and tested on denoising physical DBT data. Results were evaluated in quantitative (PSNR and SSIM measures) and qualitative (visual analysis) terms. Furthermore, a dimensionality-reduction technique (t-SNE) was used to visualize the sample spaces of the synthetic and real datasets. Results: The experiments showed that DNN models trained on synthetic data can denoise real DBT data, achieving results competitive with traditional methods in quantitative terms while showing a better balance between noise filtering and detail preservation in visual analysis. t-SNE enabled us to verify whether the synthetic and real noise lie in the same sample space. Conclusion: We propose a solution to the lack of suitable training data for DNN-based denoising of DBT projections, showing that the synthesized noise need only lie in the same sample space as the target images.
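A minimal sketch of the second data-generation route described above: corrupting a clean image with a Poisson–Gaussian noise model, where the Poisson term is signal-dependent (quantum noise) and the Gaussian term models readout noise. The `gain` and `sigma` values here are hypothetical illustration parameters, not those used in the study:

```python
import numpy as np

def add_poisson_gaussian_noise(image, gain=0.5, sigma=2.0, rng=None):
    """Corrupt a clean image with signal-dependent Poisson noise plus
    additive Gaussian readout noise (illustrative parameter values)."""
    rng = np.random.default_rng() if rng is None else rng
    # Poisson component: photon counts scale inversely with the gain
    noisy = gain * rng.poisson(image / gain).astype(np.float64)
    # Gaussian component: electronic/readout noise
    noisy += rng.normal(0.0, sigma, size=image.shape)
    return noisy
```

Pairs of (clean, noisy) images produced this way can then serve as training targets and inputs for a denoising network.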
Digital breast tomosynthesis (DBT) projections are acquired with a higher level of noise than digital mammography (DM) projections because they are obtained at a lower radiation dose. Noise reduction is therefore essential to improve the quality of the DBT exam. Recently, neural-network-based methods have been applied to denoise DBT projections, achieving remarkable results. Published work shows that these methods can surpass the results of traditional methods, but we could not find a study comparing different types of networks for denoising DBT projections. In this paper, we propose an experiment comparing neural-network-based methods of different architecture types with traditional methods. We compared five traditional non-blind denoising methods and six neural network models. Considering both quantitative and qualitative analyses, we found that some neural network models achieve remarkable results, especially shallower models.
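For reference, the PSNR measure used in the quantitative comparisons throughout these studies can be computed as below; this is a generic implementation, with `data_range` as an assumed parameter name:

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images: noise-free by definition
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher PSNR indicates a denoised image closer to the reference; in practice it is reported alongside SSIM, which better captures perceived structural fidelity.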
New deep-learning architectures are created every year, achieving state-of-the-art results in image recognition and suggesting that, in a few years, complex tasks such as sign language translation will be considerably easier, serving as a communication tool for the hearing-impaired community. On the other hand, these algorithms still require large amounts of training data, and the dataset creation process is expensive, time-consuming, and slow. This work therefore investigates digital image processing and machine learning techniques that can be used to create a sign language dataset effectively. We examine data-acquisition choices, such as the frame rate used to capture or subsample the videos, the background type, preprocessing, and data augmentation, using convolutional neural networks and object detection to build an image classifier and comparing the results with statistical tests. Different datasets were created to test the hypotheses, containing 14 words used daily, recorded with different smartphones in the RGB color system. We achieved an accuracy of 96.38% on the test set and 81.36% on a validation set with more challenging conditions, showing that 30 FPS is the best frame-rate subsample for training the classifier, that geometric transformations work better than intensity transformations, and that artificial background creation is not effective for model generalization. These trade-offs should be considered in future work as a cost-benefit guideline between computational cost and accuracy gain when creating a dataset and training a sign recognition model.
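A toy sketch of the kind of geometric augmentation the study found effective — random mirroring and 90-degree rotations. The specific transformations and probabilities here are hypothetical, and note that horizontal flips may not be appropriate for signs whose meaning depends on handedness:

```python
import numpy as np

def geometric_augment(frame, rng=None):
    """Apply illustrative geometric augmentations (random horizontal flip
    and random 90-degree rotation) to a single video frame."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:
        frame = np.fliplr(frame)   # horizontal mirror
    k = int(rng.integers(0, 4))    # rotate by a random multiple of 90 degrees
    return np.rot90(frame, k)
```

Because these transformations only rearrange pixels, they preserve intensity statistics, in contrast to intensity transformations (brightness, contrast) which the study found less helpful.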
Digital breast tomosynthesis (DBT) projections are acquired at low quality, making denoising methods essential to improve them. Deep learning methods have become the state-of-the-art approach to denoising, and some papers have proposed applying them to DBT projections; however, the results lack a clear comparison with traditional methods. In this paper, we propose using a CNN to denoise DBT projections and compare it with traditional denoising methods. The results show that the CNN is quantitatively and qualitatively superior to the traditional methods.
KEYWORDS: Digital breast tomosynthesis, Image restoration, Denoising, Computer simulations, CT reconstruction, Signal to noise ratio, Reconstruction algorithms, Evolutionary algorithms, Digital imaging, Computed tomography
The filtered backprojection (FBP) algorithm for computed tomography (CT) reconstruction can be mapped entirely onto an artificial neural network (ANN), with the backprojection (BP) operation computed analytically in one layer and the Ram-Lak filter implemented as a convolutional layer. This work adapts the BP layer for digital breast tomosynthesis (DBT) reconstruction, making it possible to use FBP simulated as an ANN to reconstruct DBT images. We show that making the Ram-Lak layer trainable improves the reconstructed image in terms of noise reduction. Finally, this study enables further proposals of ANN and deep learning models for DBT reconstruction and denoising.
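The fixed (non-trainable) Ram-Lak step that the abstract refers to is the classical ramp filter of FBP; a minimal frequency-domain sketch for one detector row is shown below. This illustrates only the standard filter, not the paper's trainable variant, and `detector_pitch` is an assumed parameter name:

```python
import numpy as np

def ram_lak_filter(projection_row, detector_pitch=1.0):
    """Apply the Ram-Lak (ramp) filter to one detector row in the
    frequency domain, as used in classical FBP."""
    n = projection_row.shape[-1]
    freqs = np.fft.fftfreq(n, d=detector_pitch)  # spatial frequencies
    ramp = np.abs(freqs)                         # |f| ramp response
    return np.real(np.fft.ifft(np.fft.fft(projection_row) * ramp))
```

Making this layer trainable amounts to replacing the fixed `ramp` response with learnable convolution weights initialized from it, which is what allows the network to trade off sharpness against noise.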
Noise is an intrinsic property of every imaging system. For imaging systems using ionizing radiation, such as digital breast tomosynthesis (DBT) or digital mammography (DM), we strive to ensure that x-ray quantum noise is the limiting noise source in images while using the lowest radiation dose able to achieve clinically satisfactory images. Therefore, new computational methods are being sought to help reduce the dose of these systems. In the case of DBT, this can be achieved when solving the inverse problem of tomographic reconstruction. In this work, we propose a non-local Gaussian Markov random field (NLGMRF) model to represent a priori knowledge in a Bayesian (maximum a posteriori, MAP) reconstruction approach for DBT. The main advantage of non-local Markov random field models is that they explicitly consider two important constraints to regularize the solution of this inverse problem: smoothing and redundancy. To evaluate the new method, a number of experiments were performed comparing it to existing DBT reconstruction techniques. Comparable or superior results were achieved relative to methods in the DBT reconstruction literature in terms of structural similarity index (SSIM), artifact spread function (ASF), and visual analysis, demonstrating that the NLGMRF model is suitable for regularizing the MAP solution in DBT reconstruction.
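The MAP formulation referred to above can be sketched generically as follows; the Gaussian likelihood and regularization weight $\lambda$ shown here are standard assumptions for illustration, not necessarily the paper's exact formulation:

```latex
\hat{x}_{\mathrm{MAP}}
  = \arg\max_{x} \, p(x \mid y)
  = \arg\min_{x} \left\{ \frac{1}{2\sigma^2}\,\lVert y - Ax \rVert^2
                         + \lambda\, U(x) \right\},
```

where $y$ denotes the measured projections, $A$ the DBT system (forward-projection) matrix, and $U(x)$ the Gibbs energy of the prior — here, the NLGMRF model encoding the smoothing and redundancy constraints.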
KEYWORDS: Digital breast tomosynthesis, Denoising, Image filtering, Reconstruction algorithms, Data modeling, Digital imaging, Electronic filtering, Filtering (signal processing), Gaussian filters, 3D image reconstruction
Digital breast tomosynthesis (DBT) is an imaging technique created to visualize 3-D mammary structures for the purpose of diagnosing breast cancer, based on the principle of computed tomography. Because it uses ionizing radiation, the "as low as reasonably achievable" (ALARA) principle should be respected, minimizing the radiation dose needed to obtain an adequate examination. A noise filtering step is thus fundamental to achieving the ALARA principle, since the noise level of the image increases as the radiation dose is reduced, making the image harder to analyze. In this work, a double denoising approach for DBT is proposed, filtering in both the projection (prereconstruction) and image (postreconstruction) domains. First, in the prefiltering step, methods were used to filter the Poisson noise. The DBT projections were then reconstructed with the filtered backprojection algorithm. Next, in the postfiltering step, methods were used to filter Gaussian noise. Experiments were performed on simulated data generated by the open virtual clinical trials (OpenVCT) software and on a physical phantom, using several combinations of methods in each domain. Our results showed that double filtering (i.e., in both domains) is not superior to filtering in the projection domain only. Investigating the possible reason for these results, we found that the noise in the DBT image domain may be better modeled by a Burr distribution than by a Gaussian distribution. This contribution can open a new research direction in the DBT denoising problem.
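For reference, the Burr distribution mentioned in the conclusion is commonly given in its Type XII form with the density below; this is the standard two-parameter parameterization, and the claim that it fits DBT image-domain noise comes from the abstract, not from this sketch:

```latex
f(x;\,c,\,k) = c\,k\,\frac{x^{c-1}}{\left(1 + x^{c}\right)^{k+1}},
\qquad x > 0,\; c > 0,\; k > 0.
```

Unlike the Gaussian, this density is supported on the positive axis and has a heavy right tail, which is one plausible reason it can better describe post-reconstruction noise.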
Currently, the state-of-the-art methods for image denoising are patch-based approaches. Redundant information present in nonlocal regions (patches) of the image is exploited for better image modeling, resulting in improved filtering quality. In this respect, nonlocal Markov random field (MRF) models are proposed by redefining the energy functions of classical MRF models to adopt a nonlocal approach. With the new energy functions, the pairwise pixel interaction is weighted according to the similarity between the patches corresponding to each pair. A maximum pseudolikelihood estimate of the spatial dependency parameter (β) for these models is also presented. To evaluate this proposal, the models are used as a priori models in a maximum a posteriori estimation to remove additive white Gaussian noise from images. The results show a notable improvement in both quantitative and qualitative terms compared with local MRFs.
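One common way to write such a patch-weighted pairwise energy is sketched below; this is an illustrative Gaussian-MRF-style form, not necessarily the exact energy proposed in the paper:

```latex
U(x) = \beta \sum_{s} \sum_{t \in \eta_s} w_{s,t}\,\left(x_s - x_t\right)^2,
\qquad
w_{s,t} = \exp\!\left(-\frac{\lVert P_s - P_t \rVert_2^2}{h^2}\right),
```

where $\eta_s$ is the (nonlocal) neighborhood of pixel $s$, $P_s$ denotes the patch centered at $s$, and $h$ controls how quickly the interaction weight decays with patch dissimilarity, so that only similar patches contribute strongly to the smoothing term.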