Compressed sensing (CS) computed tomography has proven important for several clinical applications, such as sparse-view computed tomography (CT), digital tomosynthesis, and interior tomography. Traditional compressed sensing focuses on the design of handcrafted prior regularizers, which are usually image-dependent and time-consuming. Inspired by recently proposed deep learning-based CT reconstruction models, we extend the state-of-the-art LEARN model to a dual-domain version, dubbed LEARN++. Unlike existing iteration-unrolling methods, which involve projection data only in the data-consistency layer, the proposed LEARN++ model integrates two parallel, interactive subnetworks that perform image restoration and sinogram inpainting on the image and projection domains simultaneously, fully exploiting the latent relations between projection data and reconstructed images. The experimental results demonstrate that the proposed LEARN++ model achieves competitive qualitative and quantitative results compared to several state-of-the-art methods in terms of both artifact reduction and detail preservation.
As a quantitative CT imaging technique, dual-energy CT (DECT) has attracted considerable research interest. However, material decomposition from high-energy (HE) and low-energy (LE) data may suffer from magnified noise, resulting in severe degradation of image quality and decomposition accuracy. To overcome these challenges, this study presents a novel DECT material decomposition method based on a deep neural network (DNN). In particular, this new DNN integrates the CT image reconstruction task and the nonlinear material decomposition procedure into a single network. This end-to-end network consists of three compartments: the sinogram-domain decomposition compartment, the user-defined analytical domain transformation operation (OP) compartment, and the image-domain decomposition compartment. By design, the first and third compartments are responsible for the complicated nonlinear material decomposition while denoising the DECT images. Natural images are used to synthesize the dual-energy data with assumed volume fractions and density distributions. By doing so, the burden of collecting clinical DECT data can be significantly reduced, making the new DECT reconstruction framework much easier to implement. Both numerical and experimental validation results demonstrate that the proposed DNN-based DECT reconstruction algorithm can generate high-quality basis images with improved accuracy.
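The abstract does not specify which analytical operator the OP compartment uses; a common choice for mapping sinogram-domain outputs into the image domain is filtered backprojection. The sketch below is a minimal, hypothetical parallel-beam FBP in NumPy, not the paper's actual implementation.

```python
import numpy as np

def fbp(sinogram, angles):
    """Minimal parallel-beam filtered backprojection: a hypothetical
    stand-in for an analytical domain-transformation operator that maps
    a sinogram (n_angles x n_detectors) to an image."""
    n_ang, n_det = sinogram.shape
    # Ram-Lak (ramp) filter applied along the detector axis in Fourier space
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))
    n = n_det                               # square image, one pixel per detector bin
    c = (n - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(n) - c, np.arange(n) - c)
    recon = np.zeros((n, n))
    for i, th in enumerate(angles):
        # detector coordinate of each pixel for this view, then linear interp
        t = xs * np.cos(th) + ys * np.sin(th) + c
        t0 = np.clip(np.floor(t).astype(int), 0, n_det - 2)
        w = t - t0
        recon += (1 - w) * filtered[i, t0] + w * filtered[i, t0 + 1]
    return recon * np.pi / n_ang
```

Because FBP is a fixed, differentiable linear operator, it can sit between two trainable compartments and let gradients flow from the image domain back to the sinogram domain.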
In a standard computed tomography (CT) image, pixels having the same Hounsfield Units (HU) can correspond to different materials, making it challenging to differentiate and quantify materials. Dual-energy CT (DECT) can differentiate multiple materials, but DECT scanners are not as widely available as single-energy CT (SECT) scanners. Here we develop a deep learning approach to perform DECT imaging using standard SECT data. The end point of the deep learning approach is a model capable of providing the high-energy CT image for a given input low-energy CT image. We retrospectively studied 22 patients who received a contrast-enhanced abdomen DECT scan. The differences between the predicted and original high-energy CT images are 3.47 HU, 2.95 HU, 2.38 HU, and 2.40 HU for spine, aorta, liver, and stomach, respectively. The differences between virtual non-contrast (VNC) images obtained from the original DECT and the deep learning DECT are 4.10 HU, 3.75 HU, 2.33 HU, and 2.92 HU for spine, aorta, liver, and stomach, respectively. The aorta iodine quantification difference between iodine maps obtained from the original DECT and the deep learning DECT images is 0.9%. This study demonstrates that highly accurate DECT imaging with single low-energy data is achievable using a deep learning approach. The proposed method can significantly simplify DECT system design while reducing the scanning dose and imaging cost.
X-ray computed tomography (CT) scanners have been extensively used in medical diagnosis. How to reduce radiation dose while maintaining high image reconstruction quality has become a major concern in the CT field. In this paper, we propose a statistical iterative reconstruction framework based on structure tensor total variation regularization for low-dose CT imaging. An accelerated proximal forward-backward splitting (APFBS) algorithm is developed to optimize the associated cost function. Experiments on two physical phantoms demonstrate that the proposed algorithm outperforms existing algorithms such as statistical iterative reconstruction with a total variation regularizer and filtered back projection (FBP).
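To illustrate the accelerated proximal forward-backward splitting scheme the abstract refers to, here is a minimal FISTA-style sketch. The structure tensor total variation proximal step is nontrivial, so this example substitutes a simple l1 soft-threshold as a stand-in regularizer; the forward/backward split and momentum extrapolation are the generic parts of the algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t*||.||_1 (stand-in for the structure-tensor TV prox)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apfbs(A, b, lam, n_iter=200):
    """Accelerated proximal forward-backward splitting (FISTA-style) for
    minimizing 0.5*||Ax - b||^2 + lam*R(x). Here R is the l1 norm as a
    simple illustrative stand-in for structure tensor total variation."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # forward (gradient) step
        x_new = soft_threshold(y - grad / L, lam / L)  # backward (proximal) step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x
```

In the CT setting, `A` would be the system (projection) matrix weighted by the statistical model and `b` the measured log sinogram; swapping the prox step changes the regularizer without touching the rest of the loop.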
When the scan field of view (SFOV) of a CT system is not large enough to enclose the entire cross-section of a patient, or the patient needs to be intentionally positioned partially outside the SFOV for certain clinical CT scans, truncation artifacts are often observed in the reconstructed CT images. Conventional wisdom to reduce truncation artifacts is to complete the truncated projection data via data extrapolation with different a priori assumptions. This paper presents a novel truncation artifact reduction method that works directly in the CT image domain. Specifically, a discriminative dictionary that includes a sub-dictionary of truncation artifacts and a sub-dictionary of non-artifact image information was used to separate a truncation artifact-contaminated image into two sub-images: one with reduced truncation artifacts, and the other containing only the truncation artifacts. Both experimental phantom and retrospective human subject studies have been performed to characterize the performance of the proposed truncation artifact reduction method.
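The discriminative-dictionary separation described above can be sketched as follows: sparse-code the contaminated image over the concatenated dictionary, then rebuild each sub-image from the coefficients belonging to its own sub-dictionary. This is a hypothetical toy version using greedy matching pursuit on 1-D signals, not the paper's trained dictionaries.

```python
import numpy as np

def separate(signal, D_art, D_img, n_atoms=10):
    """Separate a signal into an artifact part and an image part by sparse
    coding over the concatenated dictionary [D_art | D_img] (columns = atoms)
    with greedy matching pursuit, then reconstructing each part from its
    own sub-dictionary's coefficients."""
    D = np.hstack([D_art, D_img])
    D = D / np.linalg.norm(D, axis=0)      # unit-norm atoms
    coef = np.zeros(D.shape[1])
    r = signal.astype(float).copy()        # residual
    for _ in range(n_atoms):
        k = np.argmax(np.abs(D.T @ r))     # best-matching atom
        a = D[:, k] @ r
        coef[k] += a
        r -= a * D[:, k]
    n_art = D_art.shape[1]
    artifact_part = D[:, :n_art] @ coef[:n_art]
    image_part = D[:, n_art:] @ coef[n_art:]
    return artifact_part, image_part
```

The separation quality hinges on the two sub-dictionaries being discriminative, i.e. truncation-artifact patterns correlating poorly with the non-artifact atoms and vice versa.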
Projection and back-projection are the most computationally intensive parts of computed tomography (CT) reconstruction, and parallelization strategies using GPU computing techniques have been introduced for them. In this paper, we present a new parallelization scheme for both projection and back-projection, based on the CUDA technology developed by the NVIDIA Corporation. Instead of building a complex model, we aim to optimize the existing algorithm and make it suitable for CUDA implementation so as to achieve fast computation. Besides using texture fetching, which accelerates interpolation, we fix the number of samples in the projection computation to ensure synchronization across blocks and threads, thus preventing the latency caused by inconsistent computation complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.
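The fixed-sample-count idea can be illustrated without a GPU: every ray walks the same number of steps, so (on a GPU) all threads in a warp finish together. The sketch below is a hypothetical NumPy parallel-beam projector, with bilinear interpolation standing in for CUDA texture fetching; it is not the paper's CUDA kernel.

```python
import numpy as np

def forward_project(image, angles, n_det, n_samples=256):
    """Ray-driven parallel-beam forward projection with a FIXED number of
    samples per ray. The fixed sample count mirrors the synchronization
    trick described above; bilinear interpolation stands in for the
    hardware texture fetch."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    dets = np.linspace(-c, c, n_det)        # detector offsets
    ts = np.linspace(-c, c, n_samples)      # identical sample grid for every ray
    step = ts[1] - ts[0]
    sino = np.zeros((len(angles), n_det))
    for i, th in enumerate(angles):
        cs, sn = np.cos(th), np.sin(th)
        # (detector, sample) coordinate grids, vectorized over the whole view
        x = dets[:, None] * cs - ts[None, :] * sn + c
        y = dets[:, None] * sn + ts[None, :] * cs + c
        inside = (x >= 0) & (x <= n - 1) & (y >= 0) & (y <= n - 1)
        x0 = np.clip(np.floor(x).astype(int), 0, n - 2)
        y0 = np.clip(np.floor(y).astype(int), 0, n - 2)
        fx, fy = x - x0, y - y0
        # bilinear interpolation (texture-fetch stand-in)
        val = (image[y0, x0] * (1 - fx) * (1 - fy)
               + image[y0, x0 + 1] * fx * (1 - fy)
               + image[y0 + 1, x0] * (1 - fx) * fy
               + image[y0 + 1, x0 + 1] * fx * fy)
        sino[i] = (val * inside).sum(axis=1) * step   # line integral per detector
    return sino
```

In a CUDA port, the per-view double loop over detectors and samples becomes one thread per detector bin with a fixed-length inner loop, so no thread waits on a neighbor that traced a longer ray.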
Cardiac computed tomography (CCT) has been widely used in the diagnosis of coronary artery disease due to its continuously improving temporal and spatial resolution. When helical CT with a lower-pitch scanning mode is used, the effective radiation dose can be significant compared to other radiological exams. Many methods have been developed to reduce radiation dose in coronary CT exams, including high-pitch scans using dual-source CT scanners and the step-and-shoot scanning mode for both single-source and dual-source CT scanners. Additionally, software methods have been proposed to reduce noise in the reconstructed CT images, thereby offering the opportunity to reduce radiation dose while maintaining the desired diagnostic performance for a given imaging task. In this paper, we propose that low-dose scans should be considered in order to avoid the harm of accumulating unnecessary X-ray radiation. However, low-dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. Accordingly, a 3D dictionary-representation-based image processing method is proposed to reduce CT image noise. Information on both spatial and temporal structure continuity is utilized in the sparse representation to improve the performance of the image processing method. Clinical cases were used to validate the proposed method.