For the detection of very small objects, high-resolution detectors are expected to provide higher dose efficiency. We assessed the impact of increased resolution on a clinical photon counting detector CT (PCD-CT) system by comparing its detectability in high-resolution and standard-resolution (2x2 binning with a larger focal spot) modes. A 50 μm thin metal wire was placed in a thorax phantom and scanned in both modes at three exposure levels (12, 15, and 18 mAs); the acquired data were reconstructed with three reconstruction kernels (Br40, Br68, and Br76, from smooth to sharp). A scanning nonprewhitening model observer searched for the wire location within each slice independently. Detection performance was quantified as the area under the exponential transform of the free-response ROC curve (AUC). At 18 mAs, the high-resolution mode achieved mean AUCs of 0.45, 0.49, and 0.65 for Br40, Br68, and Br76, respectively, which were 2, 3.6, and 4.6 times those of the standard-resolution mode. For every reconstruction kernel, the high-resolution mode achieved a greater AUC at 12 mAs than the standard-resolution mode did at 18 mAs, with larger improvements for sharper kernels. These results are consistent with the greater suppression of noise aliasing expected at higher frequencies with high-resolution CT. This work illustrates that PCD-CT can provide large dose-efficiency gains for the detection of small, high-contrast lesions.
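As an illustrative sketch (not the study's implementation), a scanning nonprewhitening observer computes the test statistic λ = sᵀg at every candidate location by cross-correlating the expected signal template with the image and reporting the location of the maximum. The image size, signal amplitude, and point-like template below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D slice: Gaussian noise with a small bright "wire"
# cross-section inserted at a known location.
n = 64
img = rng.normal(0.0, 1.0, (n, n))
template = np.zeros((5, 5))
template[2, 2] = 1.0                  # idealized point-like signal profile
img[38:43, 21:26] += 10.0 * template  # signal centered at (40, 23)

def npw_scan(image, s):
    """Scanning nonprewhitening observer: evaluate lambda = s^T g in a
    sliding window and return the location with the maximum statistic."""
    h, w = s.shape
    best, best_loc = -np.inf, None
    for i in range(image.shape[0] - h + 1):
        for j in range(image.shape[1] - w + 1):
            t = float(np.sum(s * image[i:i + h, j:j + w]))
            if t > best:
                best, best_loc = t, (i + h // 2, j + w // 2)
    return best_loc, best

loc, stat = npw_scan(img, template)
```

In the study this scan is run on each reconstructed slice independently, and the reported locations feed a free-response ROC analysis.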
The performance of a CT scanner on detection tasks is difficult to measure precisely. Metrics such as contrast-to-noise ratio, modulation transfer function, and noise power spectrum do not predict detectability in the context of nonlinear reconstruction. We propose to measure detectability using a dense search challenge: a phantom is embedded with hundreds of target objects at random locations, and a human or numerical observer analyzes the reconstruction and reports the suspected locations of all target objects. The reported locations are compared to ground truth to produce a figure of merit, such as the area under the curve (AUC), that is sensitive to the acquisition dose and the dose efficiency of the CT scanner. We used simulations to design such a dense search challenge phantom and found that detectability could be measured with a precision better than 5%. Test 3D prints using the PixelPrint technique demonstrated the feasibility of this approach.
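A minimal sketch of how reported locations might be scored against ground truth before computing a figure of merit; the distance tolerance and the greedy one-to-one matching rule below are assumptions for illustration, not the paper's exact criterion:

```python
import numpy as np

def score_reports(truth, reports, tol=2.0):
    """Match each reported location to at most one unused ground-truth
    target within a distance tolerance; return (true positives,
    false positives).  Hypothetical scoring rule."""
    truth = [np.asarray(t, dtype=float) for t in truth]
    used = [False] * len(truth)
    tp = fp = 0
    for r in reports:
        r = np.asarray(r, dtype=float)
        hit = None
        for k, t in enumerate(truth):
            if not used[k] and np.linalg.norm(r - t) <= tol:
                hit = k
                break
        if hit is None:
            fp += 1
        else:
            used[hit] = True
            tp += 1
    return tp, fp

# One report lands near a target, one lands in empty background.
tp, fp = score_reports(truth=[(10, 10), (30, 5)],
                       reports=[(10.5, 9.8), (20, 20)])
```

Sweeping a confidence threshold over such matches yields the operating points from which an AUC-type figure of merit is computed.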
A dynamic prepatient attenuator can modulate flux in a computed tomography (CT) system along both fan and view angles for reduced dose, scatter, and required detector dynamic range. Reducing the dynamic-range requirement is crucial for photon counting detectors. One approach, the piecewise-linear attenuator (Hsieh and Pelc, Med Phys 2013), has shown promising results in both simulations and an initial prototype. Multiple wedges, each covering a different fan-angle range, are moved in the axial direction to change the thickness seen in an axial slice. We report on an implementation of this filter with precision components and a control algorithm targeted for a tabletop system. Algorithms for optimizing wedge position and mA modulation and for correcting bowtie-specific beam hardening are proposed. In experiments, the error between expected and observed bowtie transmission was ~2% on average and ~7% at maximum for a chest phantom. Within object boundaries, observed flux dynamic ranges of 42 for a chest phantom and 25 for an abdomen phantom were achieved, corresponding to reduction factors of 5 and 11, respectively, relative to scans without the bowtie. With beam-hardening correction, the mean CT number in soft-tissue regions was improved by 79 HU and deviated by 7 HU on average from clinical scanner CT images. The implemented piecewise-linear attenuator can dynamically adjust its thickness with high precision to achieve flexible flux control.
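The wedge geometry implies an attenuator thickness that is piecewise-linear, and continuous, in fan angle: each wedge's axial position sets the thickness at its fan-angle range, and transmission follows from Beer's law. A minimal sketch, with hypothetical knot angles, thicknesses, and effective attenuation coefficient:

```python
import numpy as np

# Hypothetical knot thicknesses (mm) at the fan-angle boundaries between
# wedges; moving a wedge axially changes these values.  Thickness is
# piecewise-linear in fan angle, so there are no jump discontinuities
# at wedge transitions.
knot_angles = np.array([-25.0, -12.5, 0.0, 12.5, 25.0])   # degrees
knot_thickness = np.array([20.0, 8.0, 1.0, 8.0, 20.0])    # mm

def bowtie_transmission(fan_angle_deg, mu=0.02):
    """Transmission through the attenuator at a given fan angle.
    mu (1/mm) is an assumed effective attenuation coefficient."""
    t = np.interp(fan_angle_deg, knot_angles, knot_thickness)
    return float(np.exp(-mu * t))

# Bowtie behavior: thin at the center ray, thick toward the edges.
center = bowtie_transmission(0.0)
edge = bowtie_transmission(25.0)
```

The control problem in the abstract is then choosing the knot thicknesses (wedge positions) and tube current per view to flatten the detected flux.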
Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models depends on the assumptions used, including an estimated pulse shape whose parameter values may differ from the actual physical ones. As the incident flux increases and the corrections become more significant, accurate parameter values become more crucial. In this work, the sensitivity to model parameter inaccuracies is analyzed for the pileup model of Taguchi et al. Spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis-material thicknesses are estimated by minimizing the negative log-likelihood under Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing of the end of the pulse matter more than most other parameters, and increasingly so at higher count rates. This result can help facilitate further work on parameter calibration.
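The maximum-likelihood estimation step can be sketched in miniature. The two-bin, single-material forward model and all coefficients below are assumptions for illustration only; the actual work uses the Taguchi et al. pileup model and multi-material basis decomposition:

```python
import numpy as np

# Assumed two-bin forward model for a single basis-material thickness t
# (cm): expected counts fall off exponentially with effective per-bin
# attenuation coefficients.
N0 = np.array([1.0e5, 8.0e4])   # incident counts per bin (assumed)
mu = np.array([0.4, 0.2])       # effective mu per bin (1/cm, assumed)

def expected_counts(t):
    return N0 * np.exp(-mu * t)

def poisson_nll(y, t):
    """Poisson negative log-likelihood (up to a constant) of observed
    bin counts y at candidate thickness t."""
    lam = expected_counts(t)
    return float(np.sum(lam - y * np.log(lam)))

# Noiseless data at a known thickness, recovered by grid search over
# candidate thicknesses (a stand-in for a proper optimizer).
t_true = 3.0
y = expected_counts(t_true)
grid = np.linspace(0.0, 10.0, 1001)
t_hat = grid[int(np.argmin([poisson_nll(y, t) for t in grid]))]
```

A parameter sensitivity study then perturbs the forward-model parameters (deadtime, pulse-tail height, pulse-end timing) and measures the resulting bias in the recovered thicknesses.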
With many attractive attributes, photon counting detectors with multiple energy bins are being considered for clinical CT systems. In practice, a large amount of projection data acquired for multiple energy bins must be transferred in real time through slip rings and data storage subsystems, causing a bandwidth bottleneck. The higher resolution of these detectors and the need for faster acquisition further contribute to this problem. In this work, we introduce a new approach to lossless compression, specifically for projection data from photon counting detectors, that exploits the dependencies in the multi-energy data. The proposed predictor estimates the value of a projection data sample as a weighted average of its neighboring samples and an approximation from other energy bins, and the prediction residuals are then encoded. Context modeling using three or four quantized local gradients is also employed to detect edge characteristics of the data. Using three simulated phantoms, including a head phantom, compression ratios of 2.3:1 to 2.4:1 were achieved. The proposed predictor using zero, three, and four gradient contexts was compared to JPEG-LS and to an ideal predictor (noiseless projection data). Among the proposed predictors, the three-gradient context is preferred, with a Golomb-coded compression ratio 7% higher than that of JPEG-LS and only 3% lower than that of the ideal predictor. In terms of encoder efficiency, the Golomb code with the proposed three-gradient contexts achieves higher compression than block floating point. We also propose a lossy compression scheme, which quantizes the prediction residuals with scalar uniform quantization using quantization boundaries that limit the ratio of quantization-error variance to quantum-noise variance. Applying the proposed predictor with the three-gradient context, the lossy compression achieved a compression ratio of 3.3:1 while introducing an error standard deviation of only 2.1% of that of quantum noise in reconstructed images. These initial simulation results show that the proposed algorithm gives good control over the bits needed to represent multi-energy projection data.
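A minimal sketch of the prediction-plus-entropy-coding idea, using one left neighbor and the co-located sample from the previous energy bin as the predictor inputs; the weight and Golomb-Rice parameter below are illustrative, and the paper's gradient-context modeling is not reproduced:

```python
import numpy as np

def rice_bits(residual, k):
    """Bit count of a Golomb-Rice code (parameter k) for a signed
    residual after zigzag mapping to a non-negative integer."""
    u = 2 * residual if residual >= 0 else -2 * residual - 1
    return (u >> k) + 1 + k   # unary quotient + stop bit + k remainder bits

def compress_bits(bins, w=0.5, k=2):
    """Predict each sample as a weighted average of its left neighbor
    (same bin) and the co-located sample in the previous energy bin,
    then total the Golomb-Rice bits of the residuals."""
    bins = np.asarray(bins, dtype=np.int64)
    total = 0
    for b in range(bins.shape[0]):
        for i in range(bins.shape[1]):
            left = int(bins[b, i - 1]) if i > 0 else 0
            other = int(bins[b - 1, i]) if b > 0 else left
            pred = int(round(w * left + (1 - w) * other))
            total += rice_bits(int(bins[b, i]) - pred, k)
    return total

# Two energy bins of smoothly varying projection samples: the predictor
# leaves small residuals, so the coded size is well below a raw 16
# bits per sample (12 samples -> 192 bits raw).
data = [[100, 102, 105, 107, 110, 112],
        [ 90,  92,  94,  97,  99, 101]]
bits = compress_bits(data)
```

The exploited structure is exactly what the abstract names: spatial smoothness within a bin and correlation across energy bins both shrink the residuals that must be entropy coded.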
The dynamic, piecewise-linear attenuator has been proposed as a concept that can shape the radiation flux incident on the patient. By reducing the signal for photon-rich measurements and increasing the signal for photon-starved measurements, the piecewise-linear attenuator has been shown in simulation to improve dynamic range, scatter, and variance and dose metrics. The piecewise-linear nature of the proposed attenuator has been hypothesized to mitigate artifacts at transitions by eliminating jump discontinuities in attenuator thickness at these points. We report the results of a prototype implementation of this concept. The attenuator was constructed using rapid prototyping technologies and was affixed to a tabletop x-ray system. Images of several sections of an anthropomorphic pediatric phantom were produced and compared to those of the same system with uniform illumination. The thickness of the illuminated slab was limited by beam collimation, and an analytic water beam-hardening correction was used for both systems. Initial results are encouraging and show improved image quality, reduced dose, and low artifact levels.
By varying its thickness to compensate for the different path length through the patient as a function of fan angle, a pre-patient bowtie filter modulates flux distribution to reduce patient dose, scatter, and detector dynamic range, and to improve image quality. A dynamic bowtie filter is superior to its traditional, static counterpart in its ability to adjust its thickness along different fan and view angles to suit a specific patient and task. Among the proposed dynamic bowtie designs, the piecewise-linear and the digital beam attenuators offer more flexibility than conventional filters, but rely on analog positioning of a limited number of wedges. In this work, we introduce a new approach with digital control, called the fluid-filled dynamic bowtie filter. It is a two-dimensional array of small binary elements (channels filled or unfilled with attenuating liquid) in which the cumulative thickness along the x-ray path contributes to the bowtie’s total attenuation. Using simulated data from a pelvic scan, the performance is compared with the piecewise-linear attenuator. The fluid-filled design better matches the desired target attenuation profile and delivers a 4.2x reduction in dynamic range. The variance of the reconstruction (or noise map) can also be more homogeneous. In minimizing peak variance, the fluid-filled attenuator shows a 3% improvement. From the initial simulation results, the proposed design has more control over the flux distribution as a function of both fan and view angles.
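A minimal sketch of the binary-element idea: each channel in the array is either filled with attenuating liquid or empty, and the digital fill pattern quantizes a desired thickness profile per fan angle. The channel depth, attenuation coefficient, and target profile below are hypothetical:

```python
import numpy as np

CHANNEL_DEPTH_MM = 2.0   # assumed depth of one channel along the ray
MU = 0.02                # assumed attenuation coefficient (1/mm)

def fill_pattern(target_thickness_mm, n_rows):
    """Digitally approximate a desired thickness profile: for each fan
    angle, fill the smallest number of channels whose cumulative depth
    reaches the target.  Returns a binary (rows x fan angles) array."""
    cols = []
    for t in target_thickness_mm:
        n_fill = min(n_rows, int(np.ceil(t / CHANNEL_DEPTH_MM)))
        cols.append([1] * n_fill + [0] * (n_rows - n_fill))
    return np.array(cols).T

def transmission(pattern):
    """Transmission per fan angle: cumulative filled depth along each
    ray (column) attenuates by Beer's law."""
    t = pattern.sum(axis=0) * CHANNEL_DEPTH_MM
    return np.exp(-MU * t)

# Bowtie-like target: thick at the edges, thin at the center.
target = [20.0, 8.0, 1.0, 8.0, 20.0]
pattern = fill_pattern(target, n_rows=12)
trans = transmission(pattern)
```

Because the channels are binary and individually switchable, the achievable profiles are limited only by the channel quantization step rather than by analog positioning of a few wedges.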