In recent years, it has been shown that time-resolved cone-beam CT imaging can be achieved using a C-arm cone-beam CT system equipped with a back-and-forth multi-sweep data acquisition mode. A reconstruction algorithm referred to as SMART-RECON has been demonstrated to achieve better than 1.0 frame per second (fps) temporal resolution. Recently, another innovation was introduced into SMART-RECON to achieve 7.5 fps; this innovation opens up new opportunities to extract physiological information beyond the well-known cone-beam CT perfusion. In this work, we study the feasibility of obtaining quantitative blood flow information from high-temporal-resolution time-resolved cone-beam CT angiography. This new capability would enable physicians to more accurately pinpoint occlusion sites and provide the image guidance needed to plan endovascular therapy, for example in acute ischemic stroke patients. Numerical simulations with ground truth and preliminary clinical case studies were conducted to demonstrate the feasibility of blood flow quantification from time-resolved CBCT angiography.
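As a concrete illustration of how flow information might be extracted from time-resolved angiography, the following minimal sketch (not the reconstruction or quantification method used in this work) estimates a mean bolus velocity from the arrival-time difference between time-attenuation curves at two points along a vessel centerline, assuming the 7.5 fps sampling rate quoted above. The function names, the half-peak arrival criterion, and the synthetic curves are illustrative assumptions.

```python
import numpy as np

def arrival_time(tac, times, threshold_frac=0.5):
    """Bolus arrival time: first time the time-attenuation curve (TAC)
    reaches a given fraction of its peak enhancement (assumed criterion)."""
    peak = tac.max()
    idx = np.argmax(tac >= threshold_frac * peak)
    return times[idx]

def flow_velocity(tacs, times, centerline_dist_mm):
    """Estimate a mean bolus velocity (mm/s) along a vessel centerline
    from TACs sampled at a proximal and a distal centerline point.

    tacs: (2, T) enhancement curves, sampled at 7.5 fps as in the
          time-resolved CBCT acquisition.
    centerline_dist_mm: vessel path length between the two points.
    """
    t_prox = arrival_time(tacs[0], times)
    t_dist = arrival_time(tacs[1], times)
    dt = t_dist - t_prox
    return centerline_dist_mm / dt if dt > 0 else np.inf

# Hypothetical example: two synthetic TACs sampled at 7.5 fps over 8 s,
# with the distal bolus delayed by 0.5 s over a 40 mm centerline segment.
times = np.arange(0, 8, 1 / 7.5)
prox = np.clip(np.sin((times - 1.0) * 1.2), 0, None)
dist = np.clip(np.sin((times - 1.5) * 1.2), 0, None)
v = flow_velocity(np.stack([prox, dist]), times, centerline_dist_mm=40.0)
print(f"Estimated bolus velocity: {v:.1f} mm/s")
```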
Deep learning generative adversarial networks (GANs) were developed and validated to provide accurate, direct noise power spectrum (NPS) estimation from a single patient CT scan. The GANs were trained to map a white noise input to a CT noise realization with the correct noise correlations specific to a single local uniform ROI. To achieve this, a two-stage strategy was developed: in the pre-training stage, ensembles of 64x64 MBIR noise-only images of a quality assurance phantom were used as training samples to jointly train the generator and discriminator; the networks were then fine-tuned using training samples from a single 101x101 ROI of an abdominal anthropomorphic phantom. Results from the GANs and from physical scans were compared in terms of mean frequency and radially averaged NPS. This workflow was extended to a patient case in which reference-dose and 25%-of-reference-dose CT scans were available for fine-tuning. The GANs generated noise-only image samples that are indistinguishable from physical measurements. The overall mean frequency discrepancy between the NPS generated by the GANs and that from physically acquired data was 0.2% in the anthropomorphic phantom validation, and the KL divergence between the 1D radially averaged NPS profiles of the two acquisitions was 2.2×10^(-3). Statistical testing indicates that the trained GANs generated an NPS equivalent to that of physical scans. In the patient-specific NPS study, the method distinguished the reference-dose case from the 25%-of-reference-dose case. These results demonstrate that the GANs characterize the properties of CT noise in terms of mean frequency and 1D NPS profile shape.
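For reference, the quantities compared above (radially averaged NPS profile and mean frequency) can be computed from an ensemble of noise-only ROIs with the standard periodogram definition. The sketch below is a minimal NumPy version; the radial bin count and the white-noise stand-in samples in the usage example are assumptions for illustration, not the phantom data used in the study.

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm):
    """2D noise power spectrum from an ensemble of noise-only ROIs
    of shape (n, N, N): NPS = (dx*dy / N^2) * <|DFT(roi)|^2>."""
    n, N, _ = noise_rois.shape
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)
    dft2 = np.abs(np.fft.fftshift(np.fft.fft2(rois), axes=(1, 2))) ** 2
    return (pixel_mm ** 2 / N ** 2) * dft2.mean(axis=0)

def radial_nps(nps, pixel_mm, n_bins=32):
    """Radially averaged 1D NPS profile and its mean frequency."""
    N = nps.shape[0]
    f = np.fft.fftshift(np.fft.fftfreq(N, d=pixel_mm))
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy).ravel()
    bins = np.linspace(0, fr.max(), n_bins + 1)
    which = np.digitize(fr, bins) - 1
    prof = np.array([nps.ravel()[which == b].mean()
                     if np.any(which == b) else 0.0 for b in range(n_bins)])
    fc = 0.5 * (bins[:-1] + bins[1:])           # bin-center frequencies
    mean_freq = np.sum(fc * prof) / np.sum(prof)
    return fc, prof, mean_freq

# Hypothetical usage with 100 samples of 64x64 noise-only images
# (white noise as a stand-in for GAN or phantom noise realizations)
rois = np.random.randn(100, 64, 64)
nps = nps_2d(rois, pixel_mm=0.7)
fc, prof, mf = radial_nps(nps, pixel_mm=0.7)
print(f"Mean frequency: {mf:.3f} mm^-1")
```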
KEYWORDS: Computed tomography, Radiography, Neural networks, CT reconstruction, 3D modeling, 3D metrology, Diagnostics, 3D image processing, 3D image reconstruction, 3D imaging standards
In this work, a deep neural network architecture was developed and trained to reconstruct volumetric CT images from two-view radiograph scout localizers. In clinical CT exams, each patient receives a two-view scout scan that generates lateral (LAT) and anterior-posterior (AP) radiographs to help the CT technologist prescribe scanning parameters; the patient then undergoes the CT scan that generates the images used for clinical diagnosis. Therefore, for each patient, the two-view radiographs serve as the input and the corresponding CT images as the output of a training pair. In this work, more than 1.1 million diagnostic CT images and their corresponding projection data from 4214 clinically indicated CT studies were retrospectively collected. The dataset was used to train a deep neural network that takes the AP and LAT projections as input and outputs a volumetric CT localizer. Once the model was trained, 3D localizers were reconstructed for a validation cohort and the results were analyzed and compared with the standard MDCT images. In particular, we were interested in the use of 3D localizers for optimizing tube current modulation schemes; we therefore compared water equivalent diameters (Dw), radiologic paths, and radiation dose distributions. The quantitative evaluation yielded the following results: the mean±SD percent difference in Dw was 0.6±4.7% in the 3D localizers compared to the Dw measured from the conventional CT reconstructions; the 3D localizers showed excellent agreement in radiologic path measurements; and gamma analysis of radiation dose distributions showed 97.3%, 97.3%, and 98.2% of voxels with a passing gamma index for anatomical regions in the chest, abdomen, and pelvis, respectively. These results demonstrate that the developed deep learning reconstruction method can generate volumetric scout CT image volumes suitable for these tasks.
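The water equivalent diameter used in the comparison above follows the standard AAPM Report 220 definition, sketched below in NumPy. The simple HU-threshold patient segmentation is an assumption for illustration; the usage example is a synthetic water cylinder, not study data.

```python
import numpy as np

def water_equivalent_diameter(ct_slice_hu, pixel_mm, hu_threshold=-300):
    """Water equivalent diameter (Dw) of one axial slice per AAPM
    Report 220: Aw = sum(HU/1000 + 1) * pixel_area over the patient
    region, and Dw = 2 * sqrt(Aw / pi)."""
    mask = ct_slice_hu > hu_threshold            # crude patient segmentation
    water_equiv_area = np.sum(ct_slice_hu[mask] / 1000.0 + 1.0) * pixel_mm ** 2
    return 2.0 * np.sqrt(water_equiv_area / np.pi)

# Hypothetical usage: a synthetic 30 cm water cylinder (0 HU) in air (-1000 HU)
N, pix = 512, 0.98                               # 512x512 grid, 0.98 mm pixels
y, x = np.mgrid[:N, :N] - N / 2
phantom = np.where(np.hypot(x, y) * pix < 150, 0.0, -1000.0)
print(f"Dw = {water_equivalent_diameter(phantom, pix):.1f} mm")  # ~300 mm
```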
KEYWORDS: Data acquisition, Image restoration, Image processing, Human subjects, Computed tomography, Data processing, X-ray computed tomography, Scanners, Lithium, Medical diagnostics
Image reconstruction from line integrals is one of the foundations of computed tomography (CT) for medical diagnosis and non-destructive testing. To accurately recover the density function from measurements taken over straight lines, analytic-formula-based and optimization-based inversions have been developed over the past several decades. Accurate image reconstruction can be achieved if the acquired dataset satisfies the data sufficiency and data consistency conditions. When these conditions are violated, however, accurate image reconstruction remains an intellectual challenge, since significant a priori information about the image object and/or the physical process of data acquisition must be incorporated. In this work, we show that a deep learning method based on a new network architecture, termed intelligent CT neural network (iCT-Net), can be employed to discover accurate image reconstruction solutions from fully truncated and sparsely sampled line integrals without explicit incorporation of a priori information about either the image object or the data acquisition process. After two-stage training, the trained iCT-Net was applied directly to real human subject data to demonstrate its generalizability to experimental data.
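To make the sparse-sampling challenge concrete, the following sketch illustrates the problem iCT-Net targets, not the network itself: analytic filtered backprojection (FBP) degrades sharply once the data sufficiency condition is violated by keeping only a sparse subset of view angles. It assumes a recent scikit-image version for the `radon`/`iradon` API; the 30-view subsampling factor is an arbitrary choice for demonstration.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)          # 200x200 test object

full_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sparse_angles = full_angles[::6]                     # keep only 30 views

sino_full = radon(image, theta=full_angles)          # line-integral data
sino_sparse = radon(image, theta=sparse_angles)

fbp_full = iradon(sino_full, theta=full_angles, filter_name='ramp')
fbp_sparse = iradon(sino_sparse, theta=sparse_angles, filter_name='ramp')

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"FBP RMSE, 180 views: {rmse(fbp_full, image):.4f}")
print(f"FBP RMSE,  30 views: {rmse(fbp_sparse, image):.4f}  (streak artifacts)")
```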
Current clinical 3D-DSA requires the acquisition of two image volumes, before and after the injection of contrast media (i.e., mask and fill scans). Deep learning angiography (DLA) is a recently developed technique that enables the generation of mask-free 3D angiography using convolutional neural networks (CNNs). In this work, the quantitative performance of DLA as a function of the number of layers in the deep neural network, along with the DLA inference computation time, was investigated. Clinically indicated rotational angiography exams of 105 patients, scanned with a C-arm cone-beam CT system using a standard 3D-DSA imaging protocol for the assessment of cerebrovascular abnormalities, were retrospectively collected. More than 185 million labeled voxels from contrast-enhanced images of 43 subjects were used as the training and testing datasets. Multiple deep CNNs were trained to perform DLA. The trained DLA models were then applied to a validation cohort consisting of the remaining image volumes from 62 subjects, and accuracy, sensitivity, precision, and F1-scores were calculated for vasculature classification in the relevant anatomy. The implementation of the best-performing model was optimized for accelerated DLA inference, and the computation time was measured under multiple hardware configurations.
Vasculature classification accuracy and 95% CI in the validation dataset were 98.7% ([98.3, 99.1]%) for the best-performing model. DLA inference user time was 17 seconds, for a throughput of 23 images/s. In conclusion, a 30-layer DLA model outperformed shallower networks, and DLA inference computation time was demonstrated not to be a limiting factor for current clinical practice.
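The reported metrics follow the standard confusion-matrix definitions for voxel-wise binary classification, sketched below; the synthetic volumes in the usage example are illustrative stand-ins, not study data.

```python
import numpy as np

def classification_metrics(pred, truth):
    """Accuracy, sensitivity, precision, and F1 for binary voxel-wise
    vasculature classification (True = vessel, False = background)."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = np.sum(pred & truth)                    # vessel called vessel
    tn = np.sum(~pred & ~truth)                  # background called background
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1

# Hypothetical usage on a small labeled volume with a few flipped voxels
truth = np.random.rand(64, 64, 64) > 0.95
pred = truth ^ (np.random.rand(64, 64, 64) > 0.999)
acc, sens, prec, f1 = classification_metrics(pred, truth)
print(f"acc={acc:.4f} sens={sens:.4f} prec={prec:.4f} F1={f1:.4f}")
```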