The channelized Hotelling observer (CHO) is well correlated with human observer performance in many CT detection/classification tasks but has not been widely adopted in routine CT quality control and performance evaluation, mainly because of the lack of an easily available, efficient, and validated software tool. We developed a highly automated solution, CT image quality evaluation and Protocol Optimization (CTPro), a web-based software platform that includes the CHO alongside traditional image quality assessment tools such as the modulation transfer function and noise power spectrum. This tool gives both the research and clinical communities easy access to the CHO and enables efficient, accurate image quality evaluation. As an example, the platform was used to evaluate the low-contrast performance of a photon-counting-detector CT across scan modes, image types, and reconstruction methods.
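For readers who want to see what such a tool automates, the following is a minimal Python sketch of a channelized Hotelling observer computation on simulated signal-present and signal-absent regions of interest. The difference-of-Gaussians channels, image sizes, and white-noise model are illustrative assumptions, not the CTPro implementation.

```python
# Minimal Channelized Hotelling Observer (CHO) sketch on simulated ROIs.
# Channel choice (difference-of-Gaussians) and noise model are illustrative only.
import numpy as np

def dog_channels(n=64, n_ch=4, sigma0=2.0, alpha=1.4, q=1.67):
    """Difference-of-Gaussians channel images, flattened to columns."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    r2 = x**2 + y**2
    chans = []
    for j in range(n_ch):
        s = sigma0 * alpha**j
        chans.append((np.exp(-r2 / (2 * (q * s) ** 2)) - np.exp(-r2 / (2 * s**2))).ravel())
    return np.stack(chans, axis=1)                              # shape (n*n, n_ch)

def cho_detectability(signal_rois, noise_rois, channels):
    """Return the CHO detectability index d' from signal-present/absent ROI stacks."""
    vs = signal_rois.reshape(len(signal_rois), -1) @ channels   # channel outputs
    vn = noise_rois.reshape(len(noise_rois), -1) @ channels
    s_bar = vs.mean(0) - vn.mean(0)                             # mean signal in channel space
    k = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(k, s_bar)                               # Hotelling template
    t_s, t_n = vs @ w, vn @ w                                   # test statistics
    return (t_s.mean() - t_n.mean()) / np.sqrt(0.5 * (t_s.var() + t_n.var()))

rng = np.random.default_rng(0)
n, n_img = 64, 200
y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
lesion = 8.0 * (x**2 + y**2 < 6**2)                             # 6-pixel-radius disk, 8 HU
noise = rng.normal(0, 20, (2 * n_img, n, n))                    # white-noise stand-in for CT noise
d_prime = cho_detectability(noise[:n_img] + lesion, noise[n_img:], dog_channels(n))
print(f"CHO d' = {d_prime:.2f}")
```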
Energy-integrating detectors used in diagnostic CT consist of a scintillator that converts x-rays into visible light and reflective septa that channel the produced light and reduce crosstalk between pixels. The reflective septa reduce the fill factor and dose efficiency of CT. It is desirable to reduce the size of the reflective septa, but for mechanical and optical reasons such size reductions are not easily achieved. We therefore propose an alternative strategy: loading the septa with a high-Z material that fluoresces under the incident x-rays and reemits characteristic x-rays, some of which are absorbed in adjacent scintillator pixels. We model this approach using Monte Carlo simulations and show that approximately half of the area lost to reflective septa can be recovered by loading Gd at a density of 1 g/cc. We show that this can translate into higher-resolution detectors: in the absence of an anti-scatter grid and assuming a reflective septa thickness of 0.1 mm, dose efficiency is better preserved even as spatial resolution increases. When moving from a 1.0 mm to a 0.5 mm pixel pitch, the expected reduction in dose efficiency is 26% without x-ray fluorescence loading; with it, the reduction is only 4%.
The spatial resolution of photon counting detectors (PCDs) can be further improved if photons that undergo charge sharing (producing counts in two adjacent pixels) are detected and binned separately. These events are captured in a coincidence counting architecture previously proposed for improving spectral imaging performance. Here, we examine the use of coincidence counts in a non-spectral application for the purpose of improving spatial resolution. With typical PCD parameters, the number of photons that lead to coincidence counts is about 3 times smaller than the number that lead to a single count only. Effectively, the detector has an alternating aperture: detector channels alternate between wide and narrow. An analytic reconstruction algorithm that simply normalizes against the air scan is inefficient at low spatial frequencies because the narrow detector channels contribute equal weight to the reconstruction but carry worse statistical information. We propose decomposing the sinogram into narrow- and wide-aperture components, zero-stuffing them, and applying frequency weights so that the detective quantum efficiency at low spatial frequencies is restored while the information content at high spatial frequencies remains accessible. The frequency-weighted sinograms are then summed and backprojected. We test this approach in simulations of numerical phantoms with various detector models.
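The following one-dimensional Python sketch illustrates the zero-stuffing and frequency-weighting step on a single detector row with alternating wide (even) and narrow (odd) channels. The variance model and weight schedule are illustrative assumptions, not the values used in the simulations.

```python
# Sketch of combining zero-stuffed wide- and narrow-aperture detector samples with
# frequency-dependent weights before backprojection. Weights and noise levels are illustrative.
import numpy as np

def combine_apertures(row, var_wide=1.0, var_narrow=3.0):
    """row: one detector row with alternating wide (even) / narrow (odd) channels."""
    n = row.size
    wide = np.zeros(n); wide[0::2] = row[0::2]        # zero-stuffed wide-aperture samples
    narrow = np.zeros(n); narrow[1::2] = row[1::2]    # zero-stuffed narrow-aperture samples
    f = np.abs(np.fft.fftfreq(n))                     # cycles/sample, 0 ... 0.5
    # Low frequencies: inverse-variance weighting restores low-frequency DQE.
    # High frequencies: both interleaved sets are needed, so weights approach equal.
    blend = np.clip(f / 0.25, 0.0, 1.0)               # 0 at DC -> 1 by f = 0.25
    w_wide = (1 - blend) * (1 / var_wide) / (1 / var_wide + 1 / var_narrow) + blend * 0.5
    w_narrow = 1.0 - w_wide
    combined = np.fft.ifft(2 * (w_wide * np.fft.fft(wide) +
                                w_narrow * np.fft.fft(narrow))).real
    return combined

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, 4 * np.pi, 256)) + 2.0
noisy = truth + rng.normal(0, [0.05, 0.09] * 128)      # narrow channels are noisier
print("mean |error|:", np.abs(combine_apertures(noisy) - truth).mean())
```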
Generating realistic radiographs from CT is mainly limited by the native spatial resolution of the latter. Here we present a general approach for synthesizing high-resolution digitally reconstructed radiographs (DRRs) from an arbitrary resolution CT volume. Our approach is based on an upsampling framework where tissues of interest are first segmented from the original CT volume and then upsampled individually to the desired voxelization (here ∼1 mm → 0.2 mm). Next, we create high-resolution 2D tissue maps by cone-beam projection of individual tissues in the desired radiography direction. We demonstrate this approach on a coronary artery calcium (CAC) patient CT scan and show that our approach preserves individual tissue volumes, yet enhances the tissue interfaces, creating a sharper DRR without introducing artificial features. Lastly, we model a dual-layer detector to synthesize high-resolution dual-energy (DE) anteroposterior and lateral radiographs from the patient CT to visualize the CAC in 2D through material decomposition. On a general level, we envision that this approach is valuable for creating libraries of synthetic yet realistic radiographs from corresponding large CT datasets.
Deep learning-based image reconstruction and noise reduction (DLIR) methods have been increasingly deployed in clinical CT. Accurate assessment of their data uncertainty properties is essential to understand the stability of DLIR in response to noise. In this work, we aim to evaluate the data uncertainty of a DLIR method using real patient data and a virtual imaging trial framework and compare it with filtered backprojection (FBP) and iterative reconstruction (IR). The ensemble of noise realizations was generated using a realistic projection-domain noise insertion technique. The impact of varying dose levels and denoising strengths was investigated for a ResNet-based deep convolutional neural network (DCNN) model trained using patient images. On the uncertainty maps, the DCNN shows more detailed structures than IR although its bias map has less structural dependency, which implies that the DCNN is more sensitive to small changes in the input. Both visual examples and histogram analysis demonstrated that hotspots of uncertainty in the DCNN may be associated with a higher chance of distortion from the truth than IR, but they may also correspond to better detection performance for some small structures.
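A minimal Python sketch of the ensemble-based uncertainty and bias maps is shown below; a Gaussian smoothing filter and image-domain noise stand in for the DCNN/IR algorithms and the projection-domain noise insertion used in the study.

```python
# Pixel-wise uncertainty and bias maps from an ensemble of noise realizations.
# The smoothing "denoiser" and Gaussian image-domain noise are illustrative stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

def uncertainty_and_bias(reference, denoiser, n_realizations=100, noise_sigma=25.0, seed=0):
    rng = np.random.default_rng(seed)
    outputs = np.stack([denoiser(reference + rng.normal(0, noise_sigma, reference.shape))
                        for _ in range(n_realizations)])
    uncertainty = outputs.std(axis=0)           # pixel-wise (data) uncertainty map
    bias = outputs.mean(axis=0) - reference     # pixel-wise bias map
    return uncertainty, bias

phantom = np.zeros((128, 128)); phantom[48:80, 48:80] = 100.0    # toy "anatomy"
unc, bias = uncertainty_and_bias(phantom, lambda img: gaussian_filter(img, 1.5))
print(f"median uncertainty {np.median(unc):.1f} HU, max |bias| {np.abs(bias).max():.1f} HU")
```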
KEYWORDS: Data modeling, Monte Carlo methods, Denoising, Deep learning, Calibration, Reliability, Convolutional neural networks, Correlation coefficients
Assessing the reliability of convolutional neural network (CNN)-based CT imaging techniques is critical for reliable deployment in practice. Some evaluation methods exist but require full access to the target CNN architecture and training data, which is not available for proprietary or commercial algorithms. Moreover, systematic evaluation methods are lacking. To address these issues, we propose a patient-specific uncertainty and bias quantification (UNIQ) method that integrates knowledge distillation and Bayesian deep learning. Knowledge distillation creates a transparent CNN ("Student CNN") to approximate the target non-transparent CNN ("Teacher CNN"). The Student CNN is built as a Bayesian-deep-learning-based probabilistic CNN that, for each input, generates a statistical distribution of the corresponding outputs and characterizes the predictive mean and the two major uncertainties: data and model uncertainty. UNIQ was evaluated using a low-dose CT denoising task. Patient and phantom scans at routine dose and synthetic quarter dose were used to create training, validation, and testing sets. To demonstrate, U-Net and ResNet were used as the backbones of the Teacher CNN and Student CNN, respectively, and were trained using independent training sets. The Student ResNet was evaluated qualitatively and quantitatively. Its pixel-wise predictive mean, data uncertainty, and model uncertainty were very similar to the counterparts from the Teacher U-Net (mean absolute error: predictive mean 1.5 HU, data uncertainty 1.8 HU, model uncertainty 1.3 HU; mean 2D correlation coefficient: total uncertainty 0.90, data uncertainty 0.86, model uncertainty 0.83). The proposed UNIQ can potentially provide a systematic characterization of the reliability of non-transparent CNN models used in CT.
For the detection of very small objects, high-resolution detectors are expected to provide higher dose efficiency. We assessed the impact of increased resolution on a clinical photon-counting-detector CT (PCD-CT) by comparing its detectability in high-resolution and standard-resolution (2x2 binning and larger focal spot) modes. A 50 μm metal wire was placed in a thorax phantom and scanned in both modes at three exposure levels (12, 15, and 18 mAs); acquired data were reconstructed with three reconstruction kernels (Br40, Br68, and Br76, from smooth to sharp). A scanning non-prewhitening model observer searched for the wire location within each slice independently. Detection performance was quantified as the area under the exponential transform of the free-response ROC curve (AUC). At 18 mAs, the high-resolution mode had mean AUCs of 0.45, 0.49, and 0.65 for Br40, Br68, and Br76, respectively, which were 2, 3.6, and 4.6 times those of the standard-resolution mode. The high-resolution mode achieved greater AUC at 12 mAs than the standard-resolution mode at 18 mAs for every reconstruction kernel, with larger improvements for sharper kernels. The results are consistent with the greater suppression of noise aliasing expected at higher frequencies with high-resolution CT. This work illustrates that PCD-CT can provide large dose-efficiency gains for detection tasks involving small, high-contrast lesions.
The performance of a CT scanner on detection tasks is difficult to measure precisely. Metrics such as contrast-to-noise ratio, modulation transfer function, and noise power spectrum do not predict detectability in the context of nonlinear reconstruction. We propose to measure detectability using a dense search challenge: a phantom is embedded with hundreds of target objects at random locations, and a human or numerical observer analyzes the reconstruction and reports the suspected locations of all target objects. The reported locations are compared to ground truth to produce a figure of merit, such as area under the curve (AUC), that is sensitive to the acquisition dose and the dose efficiency of the CT scanner. We used simulations to design such a dense search challenge phantom and found that detectability could be measured with precision better than 5%. Test 3D prints using the PixelPrint technique demonstrated the feasibility of this approach.
One of the challenges for photon counting detector (PCD)-based computed tomography (CT) is spectral distortion due to charge sharing (CS). We recently proposed the multi-energy inter-pixel coincidence counter (MEICC), which uses energy-dependent coincidence counters between each PCD pixel and its neighboring pixels to record charge-sharing coincident events during data acquisition. Previous studies have shown that the performance of MEICC is as good as that of other technologies, including the digital count summing (DCS) scheme. In this study, we develop an algorithm that uses the MEICC PCD output to create CS-corrected data. The performance of the method is assessed using Monte Carlo (MC) simulation.
In conventional tomosynthesis, the x-ray source or detector moves relative to the patient so that anatomy at a target depth is focused and other anatomy is blurred. We propose a real-time single-frame tomosynthesis design using a distributed source array and a large flat-panel detector. Each element in the source array is energized simultaneously, and the beam is collimated so that it passes through isocenter and is received in a small sector of the detector. The detector receives multiple non-overlapping x-ray images simultaneously and averages them to blur anatomy outside the target depth. Reconstruction occurs at the readout rate of the detector, typically 30 frames per second. Single-frame tomosynthesis therefore increases temporal resolution at the expense of field of view and number of views. One application of single-frame tomosynthesis is the monitoring of lung tumors during stereotactic body radiotherapy (SBRT). External biplane fluoroscopic systems, presently used for management of cranial lesions, could be repurposed with tomosynthesis at moderate cost. In a reader study with two radiation oncologists evaluating 60 simulated cases of lung SBRT, 90% were deemed acceptable for motion management with tomosynthesis compared to 53% with fluoroscopy. We constructed a prototype system using four portable x-ray sources and a fixed collimator and frame, imaged an anthropomorphic lung phantom with an embedded spherical lung nodule, and found that the prototype displayed the lung nodule with better contrast than fluoroscopy.
Deep convolutional neural network (DCNN)-based noise reduction methods have been increasingly deployed in clinical CT, and accurate assessment of their spatial resolution properties is required. Spatial resolution is typically measured on physical phantoms, which may not represent the true performance of a DCNN in patients: the DCNN is typically trained and tested with patient images, and its generalizability to physical phantoms is questionable. In this work, we propose a patient-data-based framework to measure the spatial resolution of DCNN methods, which involves lesion and noise insertion in the projection domain, lesion ensemble averaging, and modulation transfer function (MTF) measurement using an oversampled edge spread function from the cylindrical lesion signal. The impact of varying lesion contrast, dose level, and DCNN denoising strength was investigated for a ResNet-based DCNN model trained using patient images. The spatial resolution degradation of DCNN reconstructions became more severe as the contrast or radiation dose decreased or the DCNN denoising strength increased. The measured 50%/10% MTF spatial frequencies of the DCNN at the highest denoising strength were -500 HU: 0.36/0.72 mm-1; -100 HU: 0.32/0.65 mm-1; -50 HU: 0.27/0.53 mm-1; -20 HU: 0.18/0.36 mm-1; -10 HU: 0.15/0.30 mm-1, while the 50%/10% MTF values of FBP remained nearly constant at 0.38/0.76 mm-1.
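The following Python sketch illustrates the ESF-to-MTF step of such a framework on a synthetic disk insert; the disk contrast, pixel size, and bin width are illustrative assumptions, and the lesion/noise insertion and ensemble-averaging steps are omitted.

```python
# Oversampled-ESF MTF measurement around a disk (cylinder cross-section) insert.
# The synthetic disk and all numeric parameters below are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def mtf_from_disk(img, center, radius_mm, pixel_mm, bin_mm=0.02):
    """Bin pixels radially around the disk edge into an oversampled ESF, differentiate
    to the LSF, and Fourier transform to the MTF."""
    y, x = np.indices(img.shape)
    r = np.hypot((x - center[1]) * pixel_mm, (y - center[0]) * pixel_mm) - radius_mm
    keep = np.abs(r) < 5.0                                # 5 mm band around the edge
    bins = np.round(r[keep] / bin_mm).astype(int)
    order = np.argsort(bins)
    bins, vals = bins[order], img[keep][order]
    splits = np.flatnonzero(np.diff(bins)) + 1
    esf = np.array([v.mean() for v in np.split(vals, splits)])   # oversampled ESF
    lsf = np.gradient(esf) * np.hanning(esf.size)         # LSF, windowed to suppress noisy tails
    mtf = np.abs(np.fft.rfft(lsf)); mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=bin_mm)            # cycles/mm
    return freq, mtf

# Toy example: a 30 HU disk of radius 6 mm on a 0.5 mm grid, slightly blurred and noisy.
rng = np.random.default_rng(0)
n, pix = 256, 0.5
yy, xx = (np.indices((n, n)) - n / 2) * pix
img = gaussian_filter(30.0 * (np.hypot(xx, yy) < 6.0), 1.2) + rng.normal(0, 1.0, (n, n))
freq, mtf = mtf_from_disk(img, (n // 2, n // 2), 6.0, pix)
print("50%% MTF at %.2f cycles/mm" % freq[np.argmax(mtf < 0.5)])
```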
Coronary plaque risk classification in images acquired with photon-counting-detector (PCD) CT was performed using a radiomics-based machine learning (ML) model. With IRB approval, 17 coronary CTA patients were scanned on a PCD-CT system (NAEOTOM Alpha, Siemens Healthineers) with a median CTDIvol of 4.56 mGy. Four image types were reconstructed: 120-kV PCD-CT images, virtual monoenergetic images (VMIs) at 50 keV and 100 keV, and iodine maps, using an iterative reconstruction algorithm, a vascular kernel (Bv40), and 0.6-mm/0.4-mm slice thickness/increment. Atherosclerotic plaques were segmented using semi-automatic software (Research Frontier, Siemens). Segmentation confirmation and risk stratification (low- vs high-risk) were performed by a board-certified cardiac radiologist. A total of 1674 radiomic features were extracted from each image using PyRadiomics (v2.2.0b1). For each feature, a t-test was performed between low- and high-risk plaques (p<0.05 considered significant). Feature reduction was performed with a clustering algorithm, and 6 non-redundant features were input into a linear support vector machine (SVM). A leave-one-out cross-validation strategy was adopted and the area under the ROC curve (AUC) was computed. Twelve low-risk and 5 high-risk plaques were identified by the radiologist. A total of 80, 66, 183, and 48 of the 1674 features were statistically significant in the 120-kV, 50-keV, 100-keV, and iodine map images, respectively. The SVM classified 16/17 plaques correctly in the 120-kV PCD-CT and 50-keV VMI images. The AUC was 0.967, 0.967, 0.917, and 0.833 in the 120-kV, 50-keV, 100-keV, and iodine map images, respectively. An ML model using coronary PCD-CTA images at 120 kV and 50 keV best differentiated low- and high-risk coronary plaques automatically.
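A hedged scikit-learn sketch of such a pipeline (t-test screening, cluster-based reduction to 6 features, linear SVM, leave-one-out cross-validation) is given below, with random numbers standing in for the PyRadiomics features.

```python
# Feature-selection + linear-SVM pipeline sketch; everything numeric here is synthetic.
import numpy as np
from scipy.stats import ttest_ind
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(17, 1674))                  # 17 plaques x 1674 "radiomic" features
y = np.array([1] * 5 + [0] * 12)                 # 5 high-risk, 12 low-risk plaques
X[y == 1, :40] += 1.0                            # make a subset of features informative

# 1) Univariate t-test screening (p < 0.05).
p = ttest_ind(X[y == 1], X[y == 0], axis=0).pvalue
Xs = X[:, p < 0.05]

# 2) Correlation-based hierarchical clustering down to 6 non-redundant features.
dist = np.clip(1 - np.abs(np.corrcoef(Xs.T)), 0, None)
labels = fcluster(linkage(dist[np.triu_indices_from(dist, 1)], method="average"),
                  t=6, criterion="maxclust")
keep = [np.flatnonzero(labels == c)[0] for c in np.unique(labels)]

# 3) Linear SVM with leave-one-out cross-validation and AUC.
scores = np.zeros(len(y), dtype=float)
for train, test in LeaveOneOut().split(Xs[:, keep]):
    clf = SVC(kernel="linear").fit(Xs[train][:, keep], y[train])
    scores[test] = clf.decision_function(Xs[test][:, keep])
print("Leave-one-out AUC:", roc_auc_score(y, scores))
```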
Detection of low-contrast liver metastases varies between radiologists. Training may improve performance for lower-performing readers and reduce inter-radiologist variability. We recruited 31 radiologists (15 trainees, eight non-abdominal staff, and eight abdominal staff) to participate in four separate reading sessions: pre-test, search training, classification training, and post-test. In the pre-test, each radiologist interpreted 40 liver CT exams containing 91 metastases, circumscribed suspected hepatic metastases while under eye-tracker observation, and rated confidence. In search training, radiologists interpreted a separate set of 30 liver CT exams while receiving eye-tracker feedback and after coaching to increase use of coronal reformations, interpretation time, and use of liver windows. In classification training, radiologists interpreted up to 100 liver CT image patches, most with benign or malignant lesions, and compared their annotations to ground truth. The post-test was identical to the pre-test. Between pre- and post-test, sensitivity increased by 2.8% (p = 0.01) but AUC did not change significantly. Missed metastases were classified as search errors (<2 seconds gaze time) or classification errors (>2 seconds gaze time) using the eye tracker. Out of 2775 possible detections, search errors decreased (10.8% to 8.1%; p < 0.01) but classification errors were unchanged (5.7% vs 5.7%). When stratified by difficulty, easier metastases showed larger reductions in search errors: for metastases with average sensitivity of 0-50%, 50-90%, and 90-100%, reductions in search errors were 16%, 35%, and 58%, respectively. The training program studied here may improve radiologist performance by reducing search errors but not classification errors.
Purpose: Radiologists exhibit wide inter-reader variability in diagnostic performance. This work aimed to compare different feature sets for predicting whether a radiologist would detect a specific liver metastasis in contrast-enhanced computed tomography (CT) images and to evaluate possible improvements from individualizing models to specific radiologists. Approach: Abdominal CT images from 102 patients, including 124 liver metastases in 51 patients, were reconstructed at five different kernels/doses using projection-domain noise insertion to yield 510 image sets. Ten abdominal radiologists marked suspected metastases in all image sets. Potentially salient features predicting metastasis detection were identified in three ways: (i) logistic regression based on human annotations (semantic), (ii) random forests based on radiologic features (radiomic), and (iii) inductive derivation using convolutional neural networks (CNN). For all three approaches, generalized models were trained using metastases that were detected by at least two radiologists. Conversely, individualized models were trained using each radiologist's markings to predict reader-specific metastasis detection. Results: In fivefold cross-validation, both individualized and generalized CNN models achieved higher areas under the receiver operating characteristic curve (AUCs) than semantic and radiomic models in predicting reader-specific metastasis detection (p < 0.001). The individualized CNN, with a mean (SD) AUC of 0.85 (0.04), outperformed the generalized one [AUC = 0.78 (0.06), p = 0.004]. The individualized semantic [AUC = 0.70 (0.05)] and radiomic models [AUC = 0.68 (0.06)] outperformed the respective generalized versions [semantic AUC = 0.66 (0.03), p = 0.009; radiomic AUC = 0.64 (0.06), p = 0.03]. Conclusions: Individualized models slightly outperformed generalized models for all three feature sets. Inductive CNNs were better at predicting metastasis detection than semantic or radiomic features. Generalized models have implementation advantages when individualized data are unavailable.
This study introduces a framework to approximate the bias introduced by CNN noise reduction of CT exams. First, CNN noise reduction was used to approximate the noise-free image and the noise-only image of a CT scan. The noise and signal were then recombined with spatial decoupling to simulate an ensemble of 100 images. CNN noise reduction was applied to the simulated ensemble and the pixel-wise bias was calculated. This bias approximation technique was validated on natural images and phantoms and then tested on ten whole-body low-dose CT (WBLD-CT) patient exams. Bias correction improved the contrast of lung and bone structures.
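The sketch below outlines the ensemble-based bias approximation under simplifying assumptions: a Gaussian smoothing filter stands in for the CNN, and spatial decoupling is approximated by circularly shifting the estimated noise map.

```python
# Ensemble-based bias approximation: estimate signal and noise with the denoiser,
# re-pair them with spatially decoupled noise, re-denoise, and average.
import numpy as np
from scipy.ndimage import gaussian_filter

def approximate_bias(scan, denoiser, n_ensemble=100, seed=0):
    rng = np.random.default_rng(seed)
    signal = denoiser(scan)                    # approximate noise-free image
    noise = scan - signal                      # approximate noise-only image
    acc = np.zeros_like(scan, dtype=float)
    for _ in range(n_ensemble):
        shift = (rng.integers(0, scan.shape[0]), rng.integers(0, scan.shape[1]))
        acc += denoiser(signal + np.roll(noise, shift, axis=(0, 1)))   # decoupled noise
    return acc / n_ensemble - signal           # pixel-wise bias estimate

rng = np.random.default_rng(1)
phantom = np.zeros((128, 128)); phantom[40:88, 40:88] = 60.0
scan = phantom + rng.normal(0, 15.0, phantom.shape)
bias = approximate_bias(scan, lambda im: gaussian_filter(im, 1.5))
print(f"mean |bias| inside the insert: {np.abs(bias[40:88, 40:88]).mean():.2f} HU")
```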
Eye-tracking techniques can be used to understand the visual search process in diagnostic radiology. Nonetheless, most prior eye-tracking studies in CT involved only single cross-sectional images or video playback of the reconstructed volume and applied strong constraints on reader-image interactivity, yielding a disconnect between the experimental setup and clinical reality. To overcome this limitation, we developed an eye-tracking system that integrates eye-tracking hardware with in-house-built image viewing software. This system enables recording of radiologists' real-time eye movements and interactions with the displayed images in clinically relevant tasks. In this work, the system implementation was demonstrated, and the spatial accuracy of the eye-tracking data was evaluated using digital phantom images and a patient CT angiography exam. The measured offset between targets and gaze points was comparable to that of many prior eye-tracking systems (median offset: phantom, ~0.8° visual angle; patient CTA, ~0.7–1.3° visual angle). Further, the eye-tracking system was used to record radiologists' visual search in a liver lesion detection task with contrast-enhanced abdominal CT. Several measured variables were found to correlate with radiologists' sensitivity; e.g., the mean sensitivity of readers with longer interpretation times was higher than that of the others (88 ± 3% vs 78 ± 10%; p < 0.001). In summary, the proposed eye-tracking system has the potential to provide high-quality data for characterizing radiologists' visual search process in clinical CT tasks.
We estimate the minimum SNR necessary for object detection in the projection domain. We assume there is a set of objects O and we study an ideal observer that sequentially compares each member of O to the null hypothesis. This reduces to one-dimensional signal detection between two Gaussians. We find that for a search task of a circular 6 mm lesion in a region of interest 60 mm by 60 mm by 10 slices, and for a required sensitivity of 80% and specificity of 80%, the minimum required projection SNR is 5.1, a finding reminiscent of the Rose criterion.
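The following back-of-the-envelope calculation reproduces a value in this range under the assumption of roughly one independent candidate lesion position per square millimeter per slice (a Bonferroni-style approximation); the effective number of locations is an assumption here, not the paper's derivation.

```python
# Minimum projection SNR for a search task, assuming ~60*60*10 independent candidate
# lesion positions. The location count and independence assumption are illustrative.
from scipy.stats import norm

n_locations = 60 * 60 * 10            # 60 mm x 60 mm x 10 slices, ~1 candidate per mm^2 (assumed)
sensitivity, specificity = 0.80, 0.80

# Per-location false-positive rate needed for 80% specificity over the whole search region.
alpha = 1 - specificity ** (1 / n_locations)
snr_min = norm.ppf(1 - alpha) + norm.ppf(sensitivity)
print(f"per-location alpha = {alpha:.2e}, minimum SNR = {snr_min:.2f}")
# Prints a value close to 5, consistent with the ~5.1 quoted above.
```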
Fluence field modulation (FFM) using dynamic pre-patient attenuators could reduce radiation dose while preserving image quality by optimizing the distribution of x-ray photons incident on the patient. However, past dynamic attenuators require mechanical action that is challenging to implement in diagnostic CT scanners and generate a large variety of system states, making calibration complex. To circumvent these difficulties, we propose a motion-free mechanism for FFM that uses electromagnetic deflection of the focal spot, also called a flying focal spot (FFS), together with interference patterns generated by fixed metal gratings. Our proposed design is digital in that only a limited number of fluence fields is possible, but the fluence field from each focal spot position is stable with respect to source perturbation, simplifying calibration. Intermediate fluence fields can be achieved virtually during reconstruction by using either statistical reconstruction or rebinning. We describe the geometric constraints of our FFM mechanism and illustrate some of the fluence fields that can be achieved.
There is substantial variability in the performance of radiologist readers. We hypothesized that certain readers may have idiosyncratic weaknesses toward certain types of lesions and that unsupervised learning techniques might identify these patterns. After IRB approval, 25 radiologist readers (9 abdominal subspecialists and 16 non-specialists or trainees) read 40 portal-phase liver CT exams, marking all metastases and providing a confidence rating on a scale of 1 to 100. We formed a matrix of confidence ratings, with rows corresponding to readers and columns to metastases; each entry holds the confidence rating that a reader gave to a metastasis, with zero confidence used for lesions that were not marked. A clustergram was used to permute the rows and columns of this matrix so as to group similar readers and metastases together, and the result was interpreted manually. We found a cluster of lesions with atypical presentation that were missed by several readers, including subspecialists, and a separate cluster of small, subtle lesions where subspecialists were more confident of their diagnosis than trainees. These and other observations from unsupervised learning could inform targeted training and education of future radiologists.
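As an illustration, the snippet below builds a clustergram of a synthetic reader-by-metastasis confidence matrix using seaborn's clustermap; the data and clustering settings are illustrative and not those of the study.

```python
# Clustergram of a reader-by-metastasis confidence matrix (synthetic data).
import numpy as np
import seaborn as sns

rng = np.random.default_rng(0)
n_readers, n_lesions = 25, 91
confidence = rng.integers(0, 101, size=(n_readers, n_lesions)).astype(float)
confidence[:, :10] *= 0.2        # a cluster of lesions that most readers rate with low confidence
confidence[9:, 10:25] *= 0.5     # subtle lesions where non-specialists (rows 9+) are less confident

# Jointly cluster readers (rows) and metastases (columns); unmarked lesions enter as zeros.
grid = sns.clustermap(confidence, method="average", metric="euclidean",
                      cmap="viridis", figsize=(9, 5))
grid.savefig("reader_lesion_clustergram.png")
```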
The diagnostic performance of radiologist readers exhibits substantial variation that cannot be explained by CT acquisition protocol differences. Studying reader detectability from CT images may help identify why certain types of lesions are missed by multiple or specific readers. Ten subspecialized abdominal radiologists marked all suspected metastases in a multi-reader, multi-case study of 102 deidentified contrast-enhanced CT liver scans at multiple radiation dose levels. A reference reader marked ground-truth metastatic and benign lesions with the aid of histopathology or tumor progression on later scans. Multi-slice image patches and 3D radiomic features were extracted from the CT images. We trained deep convolutional neural networks (CNNs) to predict whether an average (generalized) or individual radiologist would detect or miss a specific metastasis from an image patch containing it. The individualized CNN showed higher performance, with an area under the receiver operating characteristic curve (AUC) of 0.82, than the generalized one (AUC = 0.78) in predicting reader-specific detectability. Random forests were used to build the corresponding generalized and individualized predictors from radiomic features; both the individualized (AUC = 0.64) and generalized (AUC = 0.59) radiomic predictors showed limited ability to differentiate detected from missed lesions. This shows that CNNs can learn automated features that better predict reader detectability of lesions than radiomic features. Individualized prediction of difficult lesions may allow targeted training on idiosyncratic weaknesses but requires substantial training data for each reader.
In this study, we describe a systematic approach to optimizing deep-learning-based image processing algorithms using random search. The optimization technique is demonstrated on a phantom-based noise reduction training framework; however, the techniques described can be applied generally to other deep learning image processing applications. The parameter space explored included the number of convolutional layers, number of filters, kernel size, loss function, and network architecture (either U-Net or ResNet). A total of 100 network models were examined (50 random search, 50 ablation experiments). Following the random search, the ablation experiments yielded only a very minor performance improvement, indicating that near-optimal settings had been found during the random search. The top-performing network architecture was a U-Net with 4 pooling layers, 64 filters, 3x3 kernel size, ELU activation, and a weighted feature reconstruction loss (0.2×VGG + 0.8×MSE). Relative to the low-dose input image, the CNN reduced noise by 90%, reduced RMSE by 34%, and increased SSIM by 76% on six patient exams reserved for testing. The visualization of hepatic and bone lesions was greatly improved following noise reduction.
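A minimal sketch of the random-search loop over the stated parameter space is shown below; train_and_score is a placeholder for an actual training run and its validation metric.

```python
# Random search over a small hyperparameter space; the scoring function is a stand-in.
import random

search_space = {
    "architecture": ["unet", "resnet"],
    "n_layers":     [2, 3, 4, 5],           # pooling depths / residual blocks
    "n_filters":    [16, 32, 64, 128],
    "kernel_size":  [3, 5, 7],
    "loss":         ["mse", "vgg", "0.2*vgg+0.8*mse"],
}

def train_and_score(config):
    """Placeholder: train the network described by `config` and return validation RMSE."""
    rng = random.Random(str(sorted(config.items())))
    return rng.uniform(10, 40)               # stand-in for an actual training run

random.seed(0)
trials = []
for _ in range(50):                          # 50 random-search trials, as in the study
    config = {name: random.choice(values) for name, values in search_space.items()}
    trials.append((train_and_score(config), config))

best_score, best_config = min(trials, key=lambda t: t[0])
print(best_score, best_config)
# Ablation experiments would then vary one parameter of best_config at a time.
```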
The channelized Hotelling observer (CHO), which has been shown to correlate well with human observer performance in many clinical CT tasks, has great potential to become the method of choice for objective image quality assessment. However, the use of the CHO in clinical CT remains limited, mainly due to the complexity of its measurement and calculation in practice and the lack of access to an efficient, validated software tool for most clinical users. In this work, a web-based software platform for CT image quality assessment and protocol optimization (CTPro) is introduced. A validated CHO tool, along with other common image quality assessment tools, is made readily accessible through this web platform to clinical users and researchers without the need to install additional software. An example of its application to the evaluation of convolutional-neural-network (CNN)-based denoising is demonstrated.
Adaptive radiotherapy is an effective procedure for the treatment of cancer, in which the daily anatomical changes in the patient are quantified and the dose delivered to the tumor is adapted accordingly. Deformable image registration (DIR) inaccuracies, along with delays in retrieving on-board cone-beam CT (CBCT) datasets from the treatment system and registering them with the planning kilovoltage CT (kVCT), have restricted the adaptive workflow to a small number of patients. In this paper, we present an approach for improving DIR accuracy using machine learning coupled with biomechanically guided validation. For a given set of 11 planning prostate kVCT datasets and their segmented contours, we first assembled a biomechanical model to generate synthetic abdominal motions, bladder volume changes, and physiological regression. For each of the synthetic CT datasets, we then injected noise and artifacts into the images using a novel procedure designed to closely mimic CBCT datasets. The simulated CBCT images were then used to train neural networks that predicted the noise- and artifact-removed CT images. For this purpose, we employed a constrained generative adversarial network, which consists of two deep neural networks, a generator and a discriminator: the generator produced the artifact-removed CT images while the discriminator scored their accuracy. The DIR results were finally validated using the model-generated landmarks. Results showed that the artifact-removed CT matched the planning CT closely: comparisons using image similarity metrics gave a normalized cross-correlation of >0.95 for the cGAN-based image enhancement, and when DIR was performed, the landmarks matched within 1.1 +/- 0.5 mm. This demonstrates that adversarial DNN-based CBCT enhancement improves DIR accuracy and bolsters the adaptive radiotherapy workflow.
A dynamic prepatient attenuator can modulate flux in a computed tomography (CT) system along both fan and view angles for reduced dose, scatter, and required detector dynamic range. Reducing the dynamic range requirement is crucial for photon counting detectors. One approach, the piecewise-linear attenuator (Hsieh and Pelc, Med Phys 2013), has shown promising results both in simulations and in an initial prototype. Multiple wedges, each covering a different fan-angle range, are moved in the axial direction to change the thickness seen in an axial slice. We report on an implementation of the filter with precision components and a control algorithm targeted for a tabletop system. Algorithms for optimizing wedge position and mA modulation and for correcting bowtie-specific beam hardening are proposed. In experiments, the error between expected and observed bowtie transmission was ~2% on average and ~7% at maximum for a chest phantom. Within object boundaries, observed flux dynamic ranges of 42 for a chest and 25 for an abdomen were achieved, corresponding to reduction factors of 5 and 11 relative to the object scans without the bowtie. With beam hardening correction, the CT number in soft tissue regions was improved by 79 HU and deviated by 7 HU on average from clinical scanner CT images. The implemented piecewise-linear attenuator is able to dynamically adjust its thickness with high precision to achieve flexible flux control.
This work presents refraction-corrected sound speed reconstruction techniques for transmission-based ultrasound computed tomography using a circular transducer array. Pulse travel times between element pairs can be calculated from slowness (the reciprocal of sound speed) using the eikonal equation. Slowness reconstruction is posed as a nonlinear least squares problem where the objective is to minimize the error between measured and forward-modeled pulse travel times. The Gauss-Newton method is used to convert this problem into a sequence of linear least-squares problems, each of which can be efficiently solved using conjugate gradients. However, the sparsity of ray-pixel intersection leads to ill-conditioned linear systems and hinders stable convergence of the reconstruction. This work considers three approaches for resolving the ill-conditioning in this sequence of linear inverse problems: 1) Laplacian regularization, 2) Bayesian formulation, and 3) resolution-filling gradients. The goal of this work is to provide an open-source example and implementation of the algorithms used to perform sound speed reconstruction, which is currently being maintained on GitHub: https://github.com/rehmanali1994/refractionCorrectedUSCT.github.io
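The snippet below sketches one linearized Gauss-Newton step with Laplacian regularization solved by conjugate gradients, in the spirit of the released code; for brevity it uses straight rays and crude point-sampled ray-pixel weights instead of the eikonal forward model, which are simplifying assumptions.

```python
# One Gauss-Newton step for travel-time slowness reconstruction with Laplacian
# regularization. Straight rays and point-sampled weights are illustrative shortcuts.
import numpy as np
from scipy.sparse import lil_matrix, identity, kron, diags
from scipy.sparse.linalg import cg

n = 32                                             # n x n slowness grid, unit pixel size
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
elems = np.c_[n / 2 + 0.48 * n * np.cos(angles),   # transducer element positions (x, y)
              n / 2 + 0.48 * n * np.sin(angles)]

# Ray matrix A: A[k, j] ~ path length of ray k through pixel j (point-sampling approximation).
pairs = [(i, j) for i in range(len(elems)) for j in range(len(elems)) if j > i + 4]
A = lil_matrix((len(pairs), n * n))
for k, (i, j) in enumerate(pairs):
    p0, p1 = elems[i], elems[j]
    length = np.linalg.norm(p1 - p0)
    n_samp = int(4 * length)
    for t in np.linspace(0.0, 1.0, n_samp):
        x, y = p0 + t * (p1 - p0)
        A[k, int(y) * n + int(x)] += length / n_samp
A = A.tocsr()

# Synthetic slowness map (1/1500 s/m background with a faster inclusion) and travel times.
slowness = np.full((n, n), 1 / 1500.0)
slowness[10:20, 12:22] = 1 / 1550.0
t_meas = A @ slowness.ravel()

# One Gauss-Newton step from a uniform initial guess, solved with conjugate gradients.
s0 = np.full(n * n, 1 / 1500.0)
D = diags([-1, 1], [0, 1], shape=(n - 1, n))       # 1D finite-difference operator
L = (kron(identity(n), D).T @ kron(identity(n), D) +
     kron(D, identity(n)).T @ kron(D, identity(n)))
lam = 1e-3
delta, _ = cg(A.T @ A + lam * L, A.T @ (t_meas - A @ s0) - lam * (L @ s0), maxiter=200)
recon = (s0 + delta).reshape(n, n)
print("max slowness error:", np.abs(recon - slowness).max())
```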
Iterative coordinate descent (ICD) is an optimization strategy for iterative reconstruction that is sometimes considered incompatible with parallel compute architectures such as graphics processing units (GPUs). We present a series of modifications that render ICD compatible with GPUs and demonstrate the code on a diagnostic, helical CT dataset. Our reference code is an open-source package, FreeCT ICD, which requires several hours for convergence. Three modifications are used. First, as with our reference code FreeCT ICD, the reconstruction is performed on a rotating coordinate grid, enabling the use of a stored system matrix. Second, every other voxel in the z-direction is updated simultaneously, and the sinogram data are shuffled to coalesce memory access. This increases the parallelism available to the GPU. Third, NS voxels in the xy-plane are updated simultaneously. This introduces possible crosstalk between updated voxels, but because the interaction between non-adjacent voxels is small, small values of NS still converge effectively. We find that NS = 16 enables faster reconstruction via greater parallelism, and NS = 256 remains stable but brings no additional computational benefit. When tested on a pediatric dataset of size 736x16x14000 reconstructed to a matrix size of 512x512x128 on a single GPU, our implementation of ICD converges to within 10 HU RMS in less than 5 minutes. This suggests that ICD could be competitive with simultaneous update algorithms on modern, parallel compute architectures.
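For reference, the basic (serial, CPU) ICD voxel update for weighted least squares that these modifications parallelize looks roughly like the following sketch; the tiny random system matrix and the absence of a regularizer are illustrative simplifications.

```python
# Serial ICD sweep for weighted least squares: each voxel is updated exactly
# against the current sinogram residual.
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_vox = 500, 100
A = rng.random((n_rays, n_vox)) * (rng.random((n_rays, n_vox)) < 0.1)   # sparse-ish system matrix
x_true = rng.random(n_vox)
w = rng.uniform(0.5, 1.5, n_rays)                  # per-ray statistical weights
y = A @ x_true + rng.normal(0, 0.01, n_rays)       # noisy "sinogram"

x = np.zeros(n_vox)
r = y - A @ x                                      # sinogram-domain residual
for sweep in range(20):
    for j in rng.permutation(n_vox):               # visit voxels in random order each sweep
        a_j = A[:, j]
        denom = a_j @ (w * a_j)
        if denom == 0.0:
            continue
        delta = (a_j @ (w * r)) / denom            # exact 1D minimizer for voxel j
        x[j] += delta
        r -= a_j * delta                           # keep the residual consistent
print("RMSE after 20 sweeps:", np.sqrt(np.mean((x - x_true) ** 2)))
```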
On-board 4D cone-beam CT (CBCT) imaging using a linear accelerator (LINAC) is a recently favored scanning protocol in image-guided radiotherapy, but it raises the problem of excessive radiation dose. Alternatively, 4D digital tomosynthesis (DTS) has been introduced for localization of small targets, such as the pancreas, prostate, or partial breast, which do not require full 3D information. However, the conventional filtered back-projection (FBP) reconstruction method produces severe aliasing artifacts due to the sparsely sampled projections measured in each respiration phase within a limited angular range. This effect is even more severe when the LINAC gantry sweep is too fast to sufficiently cover the respiratory gating phase. Total-variation (TV) minimization-based reconstruction frameworks from previous studies offer an alternative approach to this problem, but they lose sharpness over the iterations. In this study, we adopted an adaptively weighted TV (AwTV) scheme that penalizes the images after the TV optimization, and we introduced a look-up table containing all possible weighting parameters for each iteration step. As a result, the proposed AwTV method provided better image quality than the conventional FBP and adaptive steepest descent-projection onto convex sets (ASD-POCS) frameworks, with structural similarity (SSIM) higher by a factor of 1.12 compared to FBP and root-mean-square error (RMSE) lower by a factor of 1.06 compared to ASD-POCS. The horizontal line profile of the spherical target inserted in the moving phantom showed that the FBP and ASD-POCS images suffered from severe aliasing artifacts and smoothed pixel intensities, whereas the proposed AwTV scheme reduced the aliasing artifacts while maintaining the object's sharpness. In conclusion, the proposed AwTV method has potential for low-dose and faster 4D-DTS imaging, offering an alternative to 4D-CBCT for localization of small targets.
Patient respiration induces motion artifacts during cone-beam CT (CBCT) imaging in LINAC-based radiotherapy guidance. This can be relieved by retrospectively sorting the projection images, an approach called respiratory-correlated or four-dimensional (4D) CBCT imaging. However, the slow rotation of the LINAC gantry prevents a rapid scan, so 4D-CBCT usually involves a large radiation dose. Digital tomosynthesis (DTS), which uses a limited angular range, offers faster 4D imaging with much lower radiation dose. One drawback of 4D-DTS is the strong streak artifacts in the reconstructed images caused by the sparsely sampled projections in each phase bin. The authors propose a fast, low-dose 4D-DTS image reconstruction method to effectively reduce these artifacts under sparse-sampling tomosynthesis conditions. We used a flat-panel detector to acquire tomosynthesis projections of a respiratory moving phantom in anterior-posterior (AP) and lateral views, with a sinusoidal periodic respiratory motion applied as the input signal to the phantom. An external monitor, the Varian real-time position management (RPM) system, was used to estimate the input respiratory motion, from which four respiratory gating phases were determined to retrospectively sort the projections. For streak reduction, we used the simple iterative scheme suggested by McKinnon and Bates (MKB) and treated its result as the prior image for the proposed low-dose compressed sensing (CS) method. Three 4D-DTS image reconstruction schemes, conventional Feldkamp (FDK), MKB, and MKB-CS, were applied to the phase-wise projections of both AP and lateral views. All reconstructions were accelerated on a GPU to reduce computation time. Algorithmic performance was assessed by comparing the streak reduction ratio (SRR) and contrast-to-noise ratio (CNR) among the resulting images. The SRRs for the MKB and MKB-CS schemes were 0.24 and 0.69, respectively, indicating that the proposed MKB-CS method reduced streaking artifacts relative to the conventional method by a factor of 2.88. The CNRs of the coronal tomosynthesis images at the peak-inhale phase were 3.24, 6.36, and 10.56 for FDK, MKB, and MKB-CS, respectively, showing that the proposed method provides better image quality than the others. The reconstruction time for MKB-CS was 196.07 s, indicating that the GPU-accelerated implementation brings the algorithm within clinically feasible times (~3 min). In conclusion, the proposed low-dose 4D-DTS reconstruction scheme provides better results than the conventional methods at high speed and could thus be applied to practical 4D imaging for radiotherapy.
A dynamic bowtie filter can modulate flux along both fan and view angles for reduced dose, scatter, and required detector dynamic range. Reducing the dynamic range requirement is crucial for photon counting detectors. One approach, the piecewise-linear attenuator (Hsieh and Pelc, Med Phys 2013), has shown promising results both in simulations and in an initial prototype. Multiple wedges, each covering a different range of fan angles, are moved in the axial direction to change their attenuating thickness as seen in an axial slice. We report on an implementation of the filter with precision components and a control algorithm targeted for operation on a table-top system. Algorithms for optimizing wedge position and mA modulation and for correcting bowtie-specific beam-hardening artifacts are proposed. In experiments, the error between expected and observed bowtie transmission was ~2% on average and ~7% at maximum for a chest phantom. Within object boundaries, observed flux dynamic ranges of 42 for a chest phantom and 25 for an abdomen phantom were achieved, corresponding to reduction factors of 5 and 11 relative to the object scans without the bowtie. With beam hardening correction, the mean CT number in soft tissue regions improved by 79 HU on average and deviated by 7 HU on average from clinical scanner CT images. The implemented piecewise-linear attenuator is able to dynamically adjust its thickness with high precision to achieve flexible flux control.
Driven in large part by concerns about radiation exposure from computed tomography (CT), iterative reconstruction (IR) has emerged as a popular technique for dose reduction. Although IR clearly reduces image noise and improves resolution, its ability to maintain or improve low-contrast detectability over (possibly post-processed) filtered backprojection (FBP) reconstructions is unclear. In this work, we scanned a low-contrast phantom encased in an acrylic oval on two vendors' scanners at 120 kVp at three dose levels, for axial and helical acquisitions with and without automatic exposure control. Using the local noise power spectra of the FBP and IR images to guide the filter design, we developed a two-dimensional, angularly dependent Gaussian filter in the frequency domain that can be optimized to minimize the root-mean-square error between the image-domain-filtered FBP and IR reconstructions. The filter is extended to three dimensions by applying a through-slice Gaussian filter in the image domain. Using this three-dimensional, non-isotropic filtering approach on data with non-uniform statistics from both scanners, we were able to process the FBP reconstructions to closely match the low-contrast performance of IR images reconstructed from the same raw data. From this, we conclude that most or all of the noise reduction and low-contrast performance benefits of advanced reconstruction can be achieved with adaptive linear filtering of FBP reconstructions in the image domain.
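A small Python sketch of the fitting idea is shown below: an angularly dependent Gaussian frequency-domain filter is optimized to minimize the RMSE between a filtered noisy image and a smoothed target. The two-parameter sigma model and the surrogate "IR" image are illustrative assumptions.

```python
# Fit an angle-dependent Gaussian frequency filter so that filtered "FBP" matches "IR".
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import gaussian_filter

def angular_gaussian_filter(img, sigma0, sigma1):
    """Apply a frequency-domain Gaussian whose width varies with spatial-frequency angle."""
    n = img.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    f, theta = np.hypot(fx, fy), np.arctan2(fy, fx)
    sigma = np.maximum(sigma0 + sigma1 * np.cos(2 * theta), 1e-3)   # angle-dependent cutoff
    H = np.exp(-(f ** 2) / (2 * sigma ** 2))
    return np.fft.ifft2(np.fft.fft2(img) * H).real

rng = np.random.default_rng(0)
phantom = np.zeros((128, 128)); phantom[40:88, 40:88] = 50.0
fbp = phantom + rng.normal(0, 10.0, phantom.shape)       # noisy stand-in for FBP
ir = gaussian_filter(fbp, sigma=(1.0, 2.5))              # anisotropically smoothed stand-in for IR

def rmse(params):
    return np.sqrt(np.mean((angular_gaussian_filter(fbp, *params) - ir) ** 2))

fit = minimize(rmse, x0=[0.2, 0.0], method="Nelder-Mead")
print("fitted (sigma0, sigma1):", fit.x, "RMSE:", fit.fun)
```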
Quantitative imaging analysis has become a focus of medical imaging research in recent years. In this study, Fourier-based imaging metrics for task-based quantitative assessment of lung nodules were applied to low-dose chest tomosynthesis. Compared to conventional filtered back-projection (FBP), compressed-sensing (CS) image reconstruction has been proposed for dose and artifact reduction. We applied the CS-based low-dose reconstruction scheme to a sparsely sampled projection dataset and compared the lung nodule detectability index (d') between the FBP and CS methods. We used the non-prewhitening (NPW) model observer to estimate in-plane slice detectability in tomosynthesis and theoretically calculated d' in the Fourier domain from the weighted contributions of local noise, spatial resolution, and the task function. Spatially varying noise and spatial resolution properties were considered because the iterative reconstruction showed non-stationary characteristics. For the task function, we adopted a simple binary hypothesis-testing model that discriminates the outer and inner regions of the encapsulated lung nodule shape. The results indicated that the local noise power spectrum decreased in intensity as the number of projections increased, whereas the local transfer functions appeared similar between the FBP and CS schemes. The task functions for same-sized lung nodules showed the same pattern with different intensity, whereas the task functions for different-sized nodules had different shapes owing to the different object functions. The theoretically calculated d' values showed that the CS scheme exceeded the FBP method by factors of 2.64-3.47 and 2.50-3.10 for the two lung nodules across all projection views. This suggests that the low-dose CS algorithm provides lung nodule images comparable to FBP at 28.8% to 37.9% reduced dose for the same projection views. Moreover, the CS method with a small number of projections provided similar or somewhat higher d' values than the FBP method with a large number of projections. In conclusion, the CS scheme may enable dose reduction for lung nodule detection in chest tomosynthesis, as indicated by its higher d' compared to the conventional FBP method.
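The Fourier-domain NPW detectability index used in this kind of analysis has the form d'^2 = [∫∫ |W|^2 TTF^2 df]^2 / ∫∫ |W|^2 TTF^2 NPS df; the sketch below evaluates it for a disk task function with simple parametric TTF and NPS curves standing in for the locally measured quantities.

```python
# Fourier-domain non-prewhitening (NPW) detectability index for a disk task function.
# The TTF and NPS curves are illustrative parametric stand-ins, not measured data.
import numpy as np
from scipy.special import j1
from scipy.integrate import trapezoid

def npw_dprime(diameter_mm, contrast_hu, ttf, nps, freq):
    # Task function W(f): Fourier transform of a uniform disk (contrast * area * jinc).
    area = np.pi * (diameter_mm / 2) ** 2
    arg = np.pi * diameter_mm * freq
    W = contrast_hu * area * 2 * j1(arg) / arg
    num = (2 * np.pi * trapezoid(freq * (W * ttf) ** 2, freq)) ** 2   # radial (2D) integrals
    den = 2 * np.pi * trapezoid(freq * (W * ttf) ** 2 * nps, freq)
    return np.sqrt(num / den)

freq = np.linspace(1e-4, 2.0, 512)                    # cycles/mm
ttf = np.exp(-(freq / 0.6) ** 2)                      # illustrative transfer function
nps = 8.0e3 * freq * np.exp(-(freq / 0.5) ** 2)       # illustrative ramp-like NPS (HU^2 mm^2)
print("d' for an 8 mm, 30 HU nodule:", round(float(npw_dprime(8.0, 30.0, ttf, nps, freq)), 1))
```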
Focal spot characteristics are one of the key determinants of system resolution. Focal spot size drives source blurring, the focal spot aspect ratio from the line-focus principle drives resolution loss away from isocenter, and focal spot deflection improves sampling. The purpose of this work is to introduce focal spot rotation as a possible new mechanism to fine-tune resolution tradeoffs. A conventional design orients a rectangular focal spot toward isocenter, with resolution decreasing with distance from isocenter. We propose rotating the focal spot so that it points a small distance away from isocenter (for example, to a point 10 cm to the right of isocenter). This improves resolution to the right of isocenter, decreases resolution slightly at isocenter, and decreases resolution significantly to the left of isocenter. In a full scan, each ray passing through a point far from isocenter is sampled twice, once with a larger and once with a smaller effective focal spot. These data can be combined appropriately during reconstruction to boost the limiting radial resolution of the scan, improving the resolution homogeneity of the scanner. Dynamic rotation of the focal spot, similar to dynamic deflection, can be implemented using electromagnetic steering and has other advantages.
This work interprets the internal representations of deep neural networks trained for classification of diseased tissue in 2D mammograms. We propose an expert-in-the-loop interpretation method to label the behavior of internal units in convolutional neural networks (CNNs). Expert radiologists identify that the visual patterns detected by the units correlate with meaningful medical phenomena such as mass tissue and calcified vessels. We demonstrate that several trained CNN models are able to produce explanatory descriptions to support their final classification decisions. We view this as an important first step toward interpreting the internal representations of medical classification CNNs and explaining their predictions.
We present a fast, noise-efficient, and accurate estimator for material separation using photon-counting x-ray detectors (PCXDs) with multiple energy bins. The proposed targeted least squares estimator (TLSE) improves on a previously described A-table method by incorporating dynamic weighting that allows the variance to be closer to the Cramér-Rao lower bound (CRLB) throughout the operating range. We explore Cartesian and average-energy segmentation of the basis material space for TLSE and show that, compared with Cartesian segmentation, the average-energy method requires fewer segments to achieve similar performance. We compare the average-energy TLSE to other proposed estimators, including the gold-standard maximum likelihood estimator (MLE) and the A-table, in terms of variance, bias, and computational efficiency. The variance and bias were simulated over the range of 0 to 6 cm of aluminum and 0 to 50 cm of water with Monte Carlo methods. The average-energy TLSE achieves an average variance within 2% of the CRLB and a mean absolute error of 3.68±0.06×10−6 cm. Using the same protocol, the MLE showed variance within 1.9% of the CRLB and an average absolute error of 3.10±0.06×10−6 cm, but was 50 times slower in our implementation. Compared with the A-table method, the TLSE gives a more homogeneously optimal variance-to-CRLB ratio across the operating region. We show that the variance in basis material estimates for TLSE is lower than that of the A-table method by as much as ~36% in the peripheral regions of the operating range (thin or thick objects). The TLSE is a computationally efficient and fast method for material separation with PCXDs, with accuracy and precision comparable to the MLE.
Charge sharing, scatter, and fluorescence events in a photon counting detector (PCD) can result in multiple counting of a single incident photon in neighboring pixels. This causes energy distortion and correlation of data across energy bins in neighboring pixels (spatio-energy correlation). If a "macro-pixel" is formed by combining multiple small pixels, it will exhibit correlations across its energy bins. Charge sharing and fluorescence escape depend on pixel size and detector material, and accurately modeling these effects can be crucial for detector design and for model-based imaging applications. This study derives a correlation model for the multiple-counting events and investigates their effect on virtual non-contrast and effective monoenergetic imaging. Three versions of a 1 mm2 square CdTe macro-pixel were compared: a 4×4 grid, a 2×2 grid, or a single pixel, composed of pixels with side lengths of 250 μm, 500 μm, and 1 mm, respectively. The same flux was applied to each pixel, and pulse pile-up was ignored. The mean and covariance matrix of the measured photon counts were derived analytically using pre-computed spatio-energy response functions (SERFs) estimated from Monte Carlo simulations. Based on the Cramer-Rao lower bound, a macro-pixel with 250×250 μm2 sub-pixels shows ~2.2 times worse variance than a single 1 mm2 pixel for spectral imaging, while its penalty for effective monoenergetic imaging is <10% compared to a single 1 mm2 pixel.
Iterative reconstruction has become a popular route for dose reduction in CT scans. One method for assessing the dose reduction of iterative reconstruction is to use a low contrast detectability phantom. The apparent improvement in detectability can be very large on these phantoms, with many studies showing dose reduction in excess of 50%. In this work, we show that much of the advantage of iterative reconstruction in this context can be explained by differences in slice thickness. After adjusting the effective reconstruction kernel by blurring filtered backprojection images to match the shape of the noise power spectrum of iterative reconstruction, we produce thick slices and compare the two reconstruction algorithms. The remaining improvement from iterative reconstruction, at least in scans with relatively uniform statistics in the raw data, is significantly reduced. Hence, the effective slice thickness in iterative reconstruction may be larger than that of filtered backprojection, explaining some of the improvement in image quality.
KEYWORDS: Sensors, Signal detection, Photons, Photodetectors, Bone, Logic, Monte Carlo methods, Computed tomography, Imaging spectroscopy, Signal analyzers
Energy-discriminating, photon-counting (EDPC) detectors are attractive for their potential for improved detective quantum efficiency and for their spectral imaging capabilities. However, at high count rates, counts are lost, the detected spectrum is distorted, and the advantages of EDPC detectors disappear. Existing EDPC detectors identify counts by analyzing the signal with a bank of comparators. We explored alternative methods for pulse detection for multibin EDPC detectors that could improve performance at high count rates. The detector signal was simulated in a Monte Carlo fashion assuming a bipolar shape and analyzed using several methods, including the conventional bank of comparators. For example, one method recorded the peak energy of the pulse along with the width (temporal extent) of the pulse. The Cramer–Rao lower bound of the variance of basis material estimates was numerically found for each method. At high count rates, the variance in water material (bone canceled) measurements could be reduced by as much as an order of magnitude. Improvements in virtual monoenergetic images were modest. We conclude that stochastic noise in spectral imaging tasks could be reduced if alternative methods for pulse detection were utilized.
Iterative reconstruction methods have become very popular and show the potential to reduce dose. We present a limit on the maximum dose reduction possible with new reconstruction algorithms, obtained by analyzing the information content of the raw data under the assumption that the reconstruction algorithm has no a priori knowledge about the object or correlations between pixels. This limit applies to the task of estimating the density of a lesion embedded in a known background object, where the shape of the lesion is known but its density is not. Under these conditions, the density of the lesion can be estimated directly from the raw data in an optimal manner. This optimal estimate will meet or exceed the performance of any reconstruction method operating on the raw data, provided the reconstruction method does not introduce a priori information. The raw data bound can be compared to the lesion density estimate from FBP to produce a limit on the dose reduction possible from new reconstruction algorithms. The possible dose reduction from iterative reconstruction varies with the object, but for a lesion embedded in the center of a water cylinder it is less than 40%. Additionally, comparisons between iterative reconstruction and filtered backprojection are sometimes confounded by the effect of through-slice blurring in the iterative reconstruction. We analyzed the magnitude of the variance reduction brought about by through-slice blurring on scanners from two different vendors and found it to range between 11% and 48%.
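The raw-data bound can be illustrated with a small Poisson simulation: for a lesion of known shape and location, the Fisher information for its density is summed over rays, and the resulting Cramér-Rao variance bounds any unregularized estimate. The parallel-beam sampling and water-cylinder numbers below are illustrative assumptions, not the paper's setup.

```python
# Raw-data estimation of a lesion's density and its Cramer-Rao lower bound (toy model).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n_rays, N0 = 360 * 128, 2.0e4                  # rays per scan, photons per ray in air
mu_water, alpha_true = 0.02, 0.001             # attenuation in 1/mm; lesion contrast ~50 HU
# Background path lengths through the cylinder and lesion path lengths (most rays miss it).
bg_path = rng.uniform(100, 300, n_rays)
lesion_path = np.where(rng.random(n_rays) < 0.02, rng.uniform(0, 6, n_rays), 0.0)

lam = N0 * np.exp(-(mu_water * bg_path + alpha_true * lesion_path))
counts = rng.poisson(lam)

# Cramer-Rao lower bound on var(alpha_hat): Fisher information I(alpha) = sum(lam_i * l_i^2).
crlb = 1.0 / np.sum(lam * lesion_path ** 2)

# Maximum-likelihood estimate of alpha from the raw counts (background assumed known).
def nll(alpha):
    lam_a = N0 * np.exp(-(mu_water * bg_path + alpha * lesion_path))
    return np.sum(lam_a - counts * np.log(lam_a))

alpha_hat = minimize_scalar(nll, bounds=(-0.01, 0.01), method="bounded").x
print(f"alpha_hat = {alpha_hat:.2e}, CRLB std = {np.sqrt(crlb):.2e}, truth = {alpha_true:.0e}")
```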
C-arm-based cone-beam CT (CBCT) systems with flat-panel detectors are suitable for diagnostic knee imaging due to their potentially flexible selection of CT trajectories and wide volumetric beam coverage. In knee CT imaging, over-exposure artifacts can occur because of limitations in the dynamic range of the flat-panel detectors present on most CBCT systems. We developed a straightforward but effective method for detecting and correcting over-exposure in an automatic exposure control (AEC)-enabled standard knee scan by incorporating a prior low-dose scan. The radiation dose associated with the low-dose scan was negligible (0.0042 mSv, a 2.8% increase), which was achieved by partially sampling the projection images based on the geometry of the knees and lowering the dose to the point where the skin-air interface is just visible. Over-exposed regions were detected by comparing the line profiles of the two scans row by row on the detector, and the line integrals from the AEC and low-dose scans were then combined. The combined line integrals were reconstructed into a volumetric image using filtered back-projection. We evaluated the method using in vivo human subject knee data. The proposed method effectively detected and corrected over-exposure, and thus recovered the visibility of exterior tissues (e.g., the shape and density of the patella, and the patellar tendon) with a negligible increase in radiation exposure.
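A simplified Python sketch of the row-wise detection and substitution step is shown below; the saturation model, threshold, and array shapes are illustrative assumptions rather than the implemented method's parameters.

```python
# Row-wise over-exposure detection and line-integral substitution (toy model).
import numpy as np

def correct_overexposure(p_aec, p_lowdose, drop_threshold=0.15):
    """p_aec, p_lowdose: line-integral projections (rows x columns) from the AEC and
    prior low-dose scans. In this toy model, saturated detector pixels in the AEC scan
    show up as line integrals that collapse well below the low-dose values."""
    corrected = p_aec.copy()
    for r in range(p_aec.shape[0]):                       # compare row-wise line profiles
        overexposed = p_lowdose[r] - p_aec[r] > drop_threshold
        corrected[r, overexposed] = p_lowdose[r, overexposed]
    return corrected

# Toy example: a smooth knee-like profile; the AEC scan loses the thin (skin) regions.
cols = np.linspace(-1, 1, 256)
truth = np.tile(np.clip(2.5 * (1 - cols ** 2), 0, None), (64, 1))
p_low = truth + np.random.default_rng(0).normal(0, 0.05, truth.shape)   # noisy but unclipped
p_aec = np.where(truth < 0.4, 0.0, truth)                                # saturation -> lost signal
fixed = correct_overexposure(p_aec, p_low)
print("max error before/after:", np.abs(p_aec - truth).max(), np.abs(fixed - truth).max())
```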
Striped ratio grids are a new concept for scatter management in cone-beam CT. These grids are a modification of conventional anti-scatter grids and consist of stripes that alternate between high and low grid ratio. Such a grid is related to existing hardware concepts for scatter estimation, such as blocker-based methods or primary modulation, but rather than modulating the primary, the striped ratio grid modulates the scatter. The transitions between adjacent stripes can be used to estimate and subtract the remaining scatter; however, these transitions can be contaminated by variation in the primary radiation. We describe a simple nonlinear image processing algorithm to estimate scatter and validate the striped ratio grid on experimental data of a pelvic phantom, with the striped ratio grid emulated by combining data from two scans with different grids. Preliminary results are encouraging and show a significant reduction of scatter artifacts.
The dynamic, piecewise-linear attenuator has been proposed as a concept that can shape the radiation flux incident on the patient. By reducing the signal in photon-rich measurements and increasing the signal in photon-starved measurements, the piecewise-linear attenuator has been shown in simulation to improve dynamic range, scatter, and variance and dose metrics. The piecewise-linear nature of the attenuator is hypothesized to mitigate artifacts at transitions by eliminating jump discontinuities in attenuator thickness at these points. We report the results of a prototype implementation of this concept. The attenuator was constructed using rapid prototyping technologies and was affixed to a tabletop x-ray system. Images of several sections of an anthropomorphic pediatric phantom were produced and compared to those of the same system with uniform illumination. The thickness of the illuminated slab was limited by beam collimation, and an analytic water beam-hardening correction was used for both systems. Initial results are encouraging and show improved image quality, reduced dose, and low artifact levels.
Energy-discriminating, photon counting (EDPC) detectors have been proposed for CT systems for their spectral imaging
capabilities, improved dose efficiency and higher spatial resolution. However, these advantages disappear at high flux
because of the damaging effects of pulse pileup. From an information theoretic standpoint, spectral information is lost.
The information loss is particularly high when we assume that the EDPC detector extracts information using a bank of
comparators, as current EDPC detectors do. We analyze the use of alternative pulse detection logic which could preserve
information in the presence of pileup. For example, the peak-only detector counts only a single event at the peak energy
of multiple pulses which are piled up. We describe and evaluate five of these alternatives in simulation by numerically
estimating the Cramér-Rao lower bound of the variance. At high flux, the alternative mechanisms outperform comparators.
In spectral imaging tasks, the variance reduction can be as high as an order of magnitude.
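As a sketch of the evaluation machinery (not of any particular pulse-detection logic, and ignoring the correlations that pileup introduces), the snippet below numerically estimates the Cramér-Rao lower bound for a detector whose recorded bin counts are modeled as independent Poisson variables; the two-bin toy model, attenuation coefficients, and flux are assumptions for illustration.

```python
import numpy as np

def crlb_poisson_bins(mean_counts_fn, theta, delta=1e-3):
    """Cramér-Rao lower bound for parameters theta (e.g. basis material thicknesses)
    when the recorded bin counts are independent Poisson variables with means
    lam = mean_counts_fn(theta).

    Poisson Fisher information:
        I_jk = sum_b (1/lam_b) * d(lam_b)/d(theta_j) * d(lam_b)/d(theta_k),
    with derivatives taken by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    lam = mean_counts_fn(theta)
    grads = []
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = delta
        grads.append((mean_counts_fn(theta + step) - mean_counts_fn(theta - step)) / (2 * delta))
    grads = np.array(grads)                      # (n_params, n_bins)
    fisher = grads @ np.diag(1.0 / lam) @ grads.T
    return np.linalg.inv(fisher)                 # CRLB covariance matrix

# Toy two-bin detector model with hypothetical attenuation coefficients (1/cm):
mu = np.array([[0.20, 0.45],                     # bin 1: water, calcium
               [0.17, 0.30]])                    # bin 2: water, calcium
incident = np.array([1e5, 1e5])                  # air-scan counts per bin
model = lambda t: incident * np.exp(-mu @ t)
print(crlb_poisson_bins(model, theta=[20.0, 1.0]))
```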
By varying its thickness to compensate for the different path length through the patient as a function of fan angle, a pre-patient bowtie filter modulates flux distribution to reduce patient dose, scatter, and detector dynamic range, and to improve image quality. A dynamic bowtie filter is superior to its traditional, static counterpart in its ability to adjust its thickness along different fan and view angles to suit a specific patient and task. Among the proposed dynamic bowtie designs, the piecewise-linear and the digital beam attenuators offer more flexibility than conventional filters, but rely on analog positioning of a limited number of wedges. In this work, we introduce a new approach with digital control, called the fluid-filled dynamic bowtie filter. It is a two-dimensional array of small binary elements (channels filled or unfilled with attenuating liquid) in which the cumulative thickness along the x-ray path contributes to the bowtie’s total attenuation. Using simulated data from a pelvic scan, the performance is compared with the piecewise-linear attenuator. The fluid-filled design better matches the desired target attenuation profile and delivers a 4.2x reduction in dynamic range. The variance of the reconstruction (or noise map) can also be more homogeneous. In minimizing peak variance, the fluid-filled attenuator shows a 3% improvement. From the initial simulation results, the proposed design has more control over the flux distribution as a function of both fan and view angles.
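A toy sketch of how the two-dimensional array of binary elements translates into per-ray attenuation is shown below; the grid size, element thickness, and fluid attenuation coefficient are placeholder values, and the function name is hypothetical. Filling or emptying individual elements view by view is what gives the design its digital control over the flux as a function of fan and view angle.

```python
import numpy as np

def bowtie_transmission(filled, element_thickness_cm, mu_fluid):
    """Illustrative sketch of a fluid-filled dynamic bowtie: the cumulative
    thickness along each x-ray path is the number of filled binary elements the
    ray crosses, so attenuation is controlled digitally.

    filled               : (n_fan_angles, n_depth_elements) boolean array
    element_thickness_cm : thickness of one binary element along the ray (cm)
    mu_fluid             : linear attenuation coefficient of the fluid (1/cm)"""
    thickness = filled.sum(axis=1) * element_thickness_cm   # per-ray path length in fluid
    return np.exp(-mu_fluid * thickness)                    # transmitted fraction per fan angle

# Hypothetical example: 16 fan-angle channels, each with 8 binary depth elements.
rng = np.random.default_rng(0)
filled = rng.random((16, 8)) < 0.5
print(bowtie_transmission(filled, element_thickness_cm=0.25, mu_fluid=0.8))
```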
KEYWORDS: Image segmentation, Aluminum, Signal attenuation, X-ray detectors, Monte Carlo methods, Photodetectors, X-rays, Calibration, Error analysis, Medicine
We present a fast, noise-efficient, and accurate estimator for material separation using photon-counting x-ray detectors
(PCXDs) with multiple energy bin capability. The proposed targeted least squares estimator (TLSE) improves a
previously proposed A-Table method by incorporating dynamic weighting that allows noise to be closer to the Cramér-
Rao Lower Bound (CRLB) throughout the operating range. We explore Cartesian and average-energy segmentation of
the basis material space for TLSE, and show that iso-average-energy contours require fewer segments compared to
Cartesian segmentation to achieve similar performance. We compare the iso-average-energy TLSE to other proposed
estimators, including the gold standard maximum likelihood estimator (MLE) and the A-Table, in terms of variance, bias and computational efficiency. The variance and bias of this estimator are simulated with Monte Carlo methods over 0 to 6 cm of aluminum and 0 to 50 cm of water. The iso-average-energy TLSE achieves an average variance within 2% of the CRLB and a mean absolute error of (3.68 ± 0.06) × 10⁻⁶ cm. Using the same protocol, the MLE showed a variance-to-CRLB ratio and average bias of 1.0186 ± 0.0002 and (3.10 ± 0.06) × 10⁻⁶ cm, respectively, but was 50 times slower in our simulation. Compared to the A-Table method, the TLSE gives a more homogeneous variance-to-CRLB profile over the operating region. We show that the variance-to-CRLB ratio for the TLSE is lower than that of the A-Table method by as much as ~36% in the peripheral region of operation (thin or thick objects). The TLSE is a fast, computationally efficient method for implementing material separation in PCXDs, with performance parameters comparable to those of the MLE.
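To convey the targeted, segment-wise flavor of such an estimator, here is a heavily simplified sketch; the segment dictionary layout, the assumed bin-center energies, and the per-segment affine model fit offline to calibration data are illustrative assumptions, not the published TLSE.

```python
import numpy as np

def segmented_ls_estimate(counts, incident, segments):
    """Simplified sketch of a targeted, segment-wise least-squares estimator.

    counts   : measured counts per energy bin for one ray
    incident : air-scan counts per bin
    segments : list of dicts with keys 'emin', 'emax' (average-energy range),
               'M' (matrix) and 'b' (offset) -- hypothetical names for per-segment
               affine models fit to calibration data by least squares.

    The log-normalized counts are mapped to basis material thicknesses with the
    model of the segment selected by the average detected energy, mirroring the
    iso-average-energy segmentation of the basis material space."""
    log_meas = np.log(incident / np.maximum(counts, 1))     # per-bin line integrals
    bin_centers = np.linspace(30.0, 110.0, counts.size)     # assumed bin-center energies (keV)
    e_avg = np.sum(bin_centers * counts) / np.sum(counts)   # average detected energy
    for seg in segments:                                    # pick the targeted segment
        if seg['emin'] <= e_avg < seg['emax']:
            return seg['M'] @ log_meas + seg['b']
    raise ValueError('average energy outside the calibrated range')
```

The dynamic weighting described above then serves to keep the noise of these per-segment fits close to the CRLB across the operating range.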
Purpose: Photon counting x-ray detectors (PCXD) may improve dose efficiency but are hampered by limited count rate. They generally have imperfect energy response. Multi-layer ("in-depth") detectors have been proposed to enable higher count rates, but the potential benefit of the depth information has not been explored. We conducted a simulation study to compare in-depth detectors against single layer detectors composed of common materials. Both photon counting and energy integrating modes were studied. Methods: Polyenergetic transmissions were simulated through 25cm of water and 1cm of calcium. For PCXDs composed of Si, GaAs or CdTe, a 120kVp spectrum was used. For energy integrating x-ray detectors (EIXD) made from GaAs, CdTe or CsI, spectral imaging was done using 80 and 140kVp and matched dose. Semi-ideal and phenomenological energy response models were used. To compare these detectors, we computed the Cramér-Rao lower bound (CRLB) of the variance of basis material estimates. Results: For PCXDs with perfect energy response, depth data provides no additional information. For PCXDs with imperfect energy response and for EIXDs the improvement can be significant. For example, for a CdTe PCXD with realistic energy response, depth information can reduce the variance by ~50%. The improvement depends on the x-ray spectrum. For a semi-ideal Si detector and a narrow x-ray spectrum the depth information has minimal advantage. For EIXD, the in-depth detector has consistent variance reduction (15% and 17%-19% for water and calcium, respectively). Conclusions: Depth information is beneficial to spectral imaging for both PCXD and EIXD. The improvement depends critically on the detector energy response.
The ability to customize the incident x-ray fluence in CT via beam-shaping filters or mA modulation is known to
improve image quality and/or reduce radiation dose. Previous work has shown that complete control of x-ray fluence
(ray-by-ray fluence modulation) would further improve dose efficiency. While complete control of fluence is not
currently possible, emerging concepts such as dynamic attenuators and inverse-geometry CT allow nearly complete
control to be realized. Optimally using ray-by-ray fluence modulation requires solving a very high-dimensional
optimization problem. Most optimization techniques fail or only provide approximate solutions. We present efficient
algorithms for minimizing mean or peak variance given a fixed dose limit. The reductions in variance can easily be
translated to reduction in dose, if the original variance met image quality requirements. For mean variance, a closed form
solution is derived. The peak variance problem is recast as iterated, weighted mean variance minimization, and at each
iteration it is possible to bound the distance to the optimal solution. We apply our algorithms in simulations of scans of
the thorax and abdomen. Peak variance reductions of 45% and 65% are demonstrated in the abdomen and thorax,
respectively, compared to a bowtie filter alone. Mean variance shows smaller gains (about 15%).
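The structure of the mean-variance solution can be illustrated with a simple per-ray model; the variance model (sum of a_i / N_i), the dose model (sum of c_i N_i), and the resulting square-root allocation below follow from a standard Lagrange-multiplier argument under those assumptions and are not claimed to be the exact objective used in the paper.

```python
import numpy as np

def mean_variance_fluence(a, c, dose_budget):
    """Closed-form fluence allocation under an assumed per-ray model.

    Mean image variance is modeled as sum_i a_i / N_i, where N_i is the fluence
    assigned to ray i and a_i collects transmission and reconstruction weights;
    dose is modeled as sum_i c_i * N_i. Setting the gradient of the Lagrangian to
    zero gives N_i proportional to sqrt(a_i / c_i)."""
    a = np.asarray(a, dtype=float)
    c = np.asarray(c, dtype=float)
    n = np.sqrt(a / c)
    return n * dose_budget / np.sum(c * n)      # rescale so the dose budget is met exactly

# Toy example with hypothetical per-ray weights:
a = np.array([1.0, 4.0, 0.25])
c = np.array([1.0, 1.0, 1.0])
print(mean_variance_fluence(a, c, dose_budget=300.0))
```

The peak-variance case is then approached by repeatedly re-weighting the a_i and re-solving this mean-variance step, as described above.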
Photon-counting x-ray detectors (PCXDs) are being investigated as a replacement for conventional x-ray detectors
because they promise several advantages, including better dose efficiency, higher resolution and spectral imaging.
However, many of these advantages disappear when the x-ray flux incident on the detector is too high. We recently
proposed a dynamic, piecewise-linear attenuator (or beam shaping filter) that can control the flux incident on the
detector. This can restrict the operating range of the PCXD to keep the incident count rate below a given limit. We
simulated a system with the piecewise-linear attenuator and a PCXD using raw data generated from forward projected
DICOM files. We investigated the classic paralyzable and nonparalyzable PCXD as well as a weighted average of the
two, with the weights chosen to mimic an existing PCXD (Taguchi et al., Med Phys 2011). The dynamic attenuator has
small synergistic benefits with the nonparalyzable detector and large synergistic benefits with the paralyzable detector.
Real PCXDs operate somewhere between these models, and the weighted average model still shows large benefits from
the dynamic attenuator. We conclude that dynamic attenuators can reduce the count rate performance necessary for
adopting PCXDs.
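For reference, the two classic dead-time models and their weighted average can be written in a few lines; the 100 ns dead time and the mixing weight alpha below are placeholder values, not the parameters fit to the detector of Taguchi et al.

```python
import numpy as np

def recorded_rate(true_rate, dead_time, alpha=0.5):
    """Recorded count rate of a photon-counting detector under the two classic
    dead-time models and their weighted average.

    nonparalyzable: m = n / (1 + n * tau)
    paralyzable   : m = n * exp(-n * tau)
    weighted      : alpha * paralyzable + (1 - alpha) * nonparalyzable,
    where alpha is a fit parameter (arbitrary default here)."""
    n = np.asarray(true_rate, dtype=float)
    nonparalyzable = n / (1.0 + n * dead_time)
    paralyzable = n * np.exp(-n * dead_time)
    return alpha * paralyzable + (1.0 - alpha) * nonparalyzable

# Example: incident rates up to 100 Mcps per pixel with an assumed 100 ns dead time.
rates = np.linspace(0.0, 1e8, 5)
print(recorded_rate(rates, dead_time=100e-9))
```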
KEYWORDS: Attenuators, Modulation, Signal attenuation, Monte Carlo methods, Image quality standards, Convex optimization, Abdomen, Optical filters, Fluctuations and noise, Photons
Dynamic attenuators are beam shaping filters that can customize the x-ray illumination field to the clinical task and for
each view. These dynamic attenuators replace traditional attenuators (or “bowtie filters”) and decrease radiation dose,
dynamic range, and scatter when compared to their static counterparts. We propose a one-dimensional dynamic
attenuator that comprises multiple wedges with axially-dependent triangular cross-sections, and which are translated in
the axial direction. These wedges together produce a time-varying, piecewise-linear attenuation function. We investigate
different control methods for this attenuator and estimate the ability of the dynamic attenuator to reduce dose while
maintaining the peak variance of the scan. With knowledge of the patient anatomy, the dynamic attenuator can be
controlled by solving a convex optimization problem. This knowledge could be determined from a low dose pre-scan.
Absent this information, various heuristics can be used. We simulate the dynamic attenuator on datasets of the thorax,
abdomen, and a targeted scan of an abdominal aortic aneurysm. The dose and scatter-to-primary ratio (SPR) are
estimated using Monte Carlo simulations, and the noise is calculated analytically. Compared to a system using the
standard bowtie with typical mA modulation, dose reductions of 50% are observed. Compared to an optimized, patient-specific mA modulation, the typical dose reduction is 30%. If the dynamic attenuator is controlled with a heuristic, typical dose reductions are also 30%. The gains are larger in the targeted scan. The SPR is also reduced by 20% in the abdomen. We conclude that the dynamic attenuator has significant potential to reduce dose without increasing the peak variance of the scan.
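A toy version of the pre-scan-informed convex program might look like the sketch below; the per-ray dose model, the exp(p + t) pixel-variance model, the nonnegative weights, and the use of CVXPY are illustrative simplifications of the formulation in the paper, with all numbers hypothetical.

```python
import numpy as np
import cvxpy as cp

# t_i >= 0 is the attenuator line integral added to ray i; p_i is the patient line
# integral known from a low dose pre-scan. Pixel variance is modeled as a
# nonnegative weighted sum of exp(p_i + t_i) (noise grows with total attenuation),
# and per-ray dose as exp(-t_i), which keeps the problem convex.
rng = np.random.default_rng(1)
n_rays, n_pixels = 8, 3
p = rng.uniform(1.0, 5.0, n_rays)                 # patient line integrals (hypothetical)
W = rng.uniform(0.0, 1.0, (n_pixels, n_rays))     # reconstruction weight of ray i in pixel k
peak_limit = 4.0 * np.max(W @ np.exp(p))          # allowed peak variance (reference level)

t = cp.Variable(n_rays, nonneg=True)
pixel_variance = W @ cp.exp(p + t)                # convex in t
dose = cp.sum(cp.exp(-t))                         # convex in t
problem = cp.Problem(cp.Minimize(dose), [cp.max(pixel_variance) <= peak_limit])
problem.solve()
print("attenuator profile:", np.round(t.value, 2))
```

When the pre-scan is unavailable, the heuristics mentioned above would take the place of solving this program for each view.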
KEYWORDS: Reconstruction algorithms, Tissues, Image restoration, Algorithm development, Radiotherapy, Data modeling, Detection and tracking algorithms, Data analysis, Image filtering, Medicine
Truncation artifacts arise when the object being imaged extends past the scanned field of view (SFOV). The line
integrals which lie beyond the SFOV are unmeasured, and reconstruction with traditional filtered backprojection (FBP)
produces bright signal artifacts at the edge of the SFOV and little useful information outside the SFOV. A variety of
techniques have been proposed to correct for truncation artifacts by estimating the unmeasured rays. We explore an
alternative, iterative correction technique that reduces the artifacts and recovers the support (or outline) of the object that is consistent with the measured rays. We assume that the support is filled uniformly with tissue of a given CT number (for example, water-equivalent soft tissue) and segment the region outside the SFOV in a dichotomous fashion into tissue and air. In general, any choice for the object support will not be consistent with the measured rays in that a
forward projection of the image containing the proposed support will not match the measured rays. The proposed
algorithm reduces this inconsistency by deforming the object support to better match the measured rays. We initialize the reconstruction using the water cylinder extrapolation algorithm, an existing truncation artifact correction technique, but other starting algorithms can be used. The estimate of the object support is then iteratively deformed to reduce the inconsistency with the measured rays. After several iterations, forward projection is used to estimate the missing rays. Preliminary results indicate that this iterative, support recovery technique is able to produce superior reconstructions in the case of significant truncation compared to water cylinder extrapolation.
Stationary source inverse-geometry CT (SS-IGCT) has been proposed as a new system architecture that has several key
advantages over traditional cone beam CT (CBCT). One advantage is the potential for acquiring a large volume of
interest with minimal cone-beam artifacts and with very high temporal resolution. We anticipate that SS-IGCT will use
large, stationary source arrays, with gaps in between separate source array modules. These gaps make reconstruction
challenging because most analytic reconstruction algorithms assume a continuous source trajectory. SS-IGCT is capable
of producing the same dataset as a traditional scanner taking multiple overlapping axial scans, but with segments of the
views missing from each axial scan because of gaps. We propose the following, two-stage volumetric reconstruction
algorithm. In the first stage, the missing rays are estimated in a spatially varying fashion using available data and
geometric considerations, and reconstruction proceeds with standard algorithms. The missing data are then re-estimated
by a forward projection step. These new estimates are quite good and the reconstruction can be performed again using
any algorithm that supports multiple parallel axial scans. Although inspired by iterative reconstruction, our algorithm
only needs one "iteration" of forward- and back-projection in practice and is efficient. Simulations of a thorax phantom
were performed showing the efficacy of this technique and the ability of SS-IGCT to suppress cone-beam artifacts
compared to conventional CBCT. The noise and resolution characteristics are comparable to that of CBCT.
Traditional CT systems face a tradeoff between temporal resolution, volumetric coverage and cone beam artifacts and
also have limited ability to customize the distribution of incident x-rays to the imaging task. Inverse geometry CT
(IGCT) can overcome some of these limitations by placing a small detector opposite a large, rotating scanned source
array. It is difficult to quickly rotate this source array to achieve rapid imaging, so we propose using stationary source
arrays instead and investigate the feasibility of such a system. We anticipate that distinct source arrays will need to be
physically separated, creating gaps in the sinogram. Symmetry can be used to fill the missing rays except those
connecting gaps. With three source arrays, a large triangular field of view emerges. As the small detector orbits the
patient, each source spot must be energized at multiple specifically designed times to ensure adequate sampling. A
timing scheme is proposed that avoids timing clashes, efficiently uses the detector, and allows for simple collimation.
The two-dimensional MTF, noise characteristics, and artifact levels are all found to be comparable to those of parallel-beam
systems. A complete, 100 millisecond volumetric scan may be feasible.