The X-ray dose delivered in a clinical CT scan manifests itself as the noise level of the reconstructed image volume. Understanding the interaction between dose, noise, and reconstruction helps guide the design of CT systems and reconstruction algorithms. Because most practical reconstruction algorithms in clinical CT systems are implemented as filtered back-projection, this work proposes a unified analytical framework that establishes the connection between dose, noise, and the view weighting functions of different reconstruction algorithms in CT helical scans. The proposed framework helps one better understand the relationship between X-ray dose and image noise and provides guidance on how to design the view weighting function in reconstruction without extensive simulations and experiments. Even though certain assumptions were made to simplify the analytical model, experimental results using both simulated data and real CT scan data show that the proposed model is reasonably accurate even for objects of human body shape. In addition, an analytical form of the theoretically optimal dose efficiency as a function of helical pitch is derived from the proposed framework, leading to the somewhat unintuitive but interesting conclusion that the theoretically optimal dose efficiency generally varies with helical pitch.
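As a minimal illustration of the dose, noise, and view-weighting connection discussed above (a sketch based on standard propagation of variance, not the paper's actual framework), the Python snippet below computes the relative noise variance of a voxel formed as a normalized weighted sum of independent view contributions. The weighting functions used here are hypothetical.

```python
import numpy as np

def relative_voxel_variance(weights, view_variance=1.0):
    """Relative noise variance of a voxel value formed as a normalized,
    weighted sum of independent view contributions:
        Var ~ sum(w_i^2 * sigma_i^2) / (sum(w_i))^2
    """
    w = np.asarray(weights, dtype=float)
    return np.sum(w ** 2 * view_variance) / np.sum(w) ** 2

# Hypothetical view weighting functions over one helical rotation:
n_views = 360
uniform_w = np.ones(n_views)
tapered_w = np.sin(np.linspace(0.0, np.pi, n_views)) ** 2  # smooth helical taper

print(relative_voxel_variance(uniform_w))  # uniform weights minimize variance for a fixed per-view noise level
print(relative_voxel_variance(tapered_w))  # tapering raises variance, i.e. lower dose efficiency
```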
KEYWORDS: Computed tomography, Image quality, Imaging systems, Reconstruction algorithms, Statistical analysis, Signal to noise ratio, Matrices, Data analysis, Medical imaging, Systems modeling
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluation of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations as well as further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
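For reference, a minimal sketch of the conventional CHO estimate from channelized samples is given below (the LOOL-regularized covariance estimator of Hoffbeck and Landgrebe is not reproduced here). The AUC formula AUC = Φ(d′/√2) assumes equal-covariance Gaussian channel outputs, and all variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def cho_auc(chan_present, chan_absent):
    """Conventional CHO estimate from channelized samples.

    chan_present, chan_absent: arrays of shape (n_samples, n_channels).
    Returns AUC = Phi(d' / sqrt(2)), valid under the equal-covariance
    Gaussian assumption on the channel outputs.
    """
    dv = chan_present.mean(axis=0) - chan_absent.mean(axis=0)
    # Pooled channel covariance; this is the matrix a shrinkage/LOOL-style
    # estimator would regularize when only a few scans are available.
    S = 0.5 * (np.cov(chan_present, rowvar=False) + np.cov(chan_absent, rowvar=False))
    d2 = dv @ np.linalg.solve(S, dv)  # Hotelling SNR^2
    return norm.cdf(np.sqrt(d2) / np.sqrt(2.0))

# Illustrative call with synthetic channel outputs (50 samples, 10 channels):
rng = np.random.default_rng(1)
print(cho_auc(rng.normal(0.3, 1.0, (50, 10)), rng.normal(0.0, 1.0, (50, 10))))
```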
Radiation dose to the patient has become a major concern in clinical computed tomography (CT) imaging. Various hardware and algorithm solutions have been designed to reduce dose. Among them, iterative reconstruction (IR) has been widely expected to be an effective dose reduction approach for CT. However, there is no clear understanding of the exact amount of dose saving an IR approach can offer for various clinical applications, and quantitative image quality assessment should be task-based. This work applied mathematical model observers to study the detectability performance of CT scan data reconstructed using an advanced IR approach as well as the conventional filtered back-projection (FBP) approach. The purpose of this work is to establish a practical and robust approach for task-based image quality evaluation of CT IR and to assess the dose saving capability of the IR method under study. Low contrast (LC) objects embedded in head-size and body-size phantoms were imaged multiple times at different dose levels. Independent signal-present and signal-absent pairs were generated for model observer training and testing. Receiver operating characteristic (ROC) curves for the location-known-exactly task and localization ROC (LROC) curves for the location-unknown task, together with their corresponding area under the curve (AUC) values, were calculated. Results showed that an approximately threefold dose reduction was achieved using the IR method under study.
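As a point of reference for how AUC values like those above can be computed from model-observer outputs, here is a minimal nonparametric (Mann-Whitney) AUC estimate for the location-known task; it is not the LROC analysis used in the study, and the test statistics shown are synthetic.

```python
import numpy as np

def empirical_auc(ratings_present, ratings_absent):
    """Nonparametric (Mann-Whitney) AUC: the probability that a randomly chosen
    signal-present test statistic exceeds a signal-absent one, ties counted as 1/2."""
    rp = np.asarray(ratings_present, dtype=float)[:, None]
    ra = np.asarray(ratings_absent, dtype=float)[None, :]
    return np.mean((rp > ra) + 0.5 * (rp == ra))

# Synthetic model-observer test statistics at two hypothetical dose levels:
rng = np.random.default_rng(0)
print(empirical_auc(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200)))  # lower dose
print(empirical_auc(rng.normal(2.0, 1.0, 200), rng.normal(0.0, 1.0, 200)))  # higher dose
```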
KEYWORDS: Reconstruction algorithms, Tissues, Image restoration, Algorithm development, Radiotherapy, Data modeling, Detection and tracking algorithms, Data analysis, Image filtering, Medicine
Truncation artifacts arise when the object being imaged extends past the scanned field of view (SFOV). The line
integrals which lie beyond the SFOV are unmeasured, and reconstruction with traditional filtered backprojection (FBP)
produces bright signal artifacts at the edge of the SFOV and little useful information outside the SFOV. A variety of
techniques have been proposed to correct for truncation artifacts by estimating the unmeasured rays. We explore an
alternative, iterative correction technique that reduces the artifacts and recovers the support (or outline) of the object that is consistent with the measured rays. We assume that the support is filled uniformly with tissue of a given CT number (for example, water-equivalent soft tissue) and segment the region outside the SFOV in a dichotomous fashion into tissue and air. In general, any choice for the object support will not be consistent with the measured rays in that a
forward projection of the image containing the proposed support will not match the measured rays. The proposed
algorithm reduces this inconsistency by deforming the object support to better match the measured rays. We initialize the reconstruction using the water cylinder extrapolation algorithm, an existing truncation artifact correction technique, but other starting algorithms can be used. The estimate of the object support is then iteratively deformed to reduce the inconsistency with the measured rays. After several iterations, forward projection is used to estimate the missing rays. Preliminary results indicate that this iterative support-recovery technique produces superior reconstructions compared to water cylinder extrapolation in cases of significant truncation.
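To make the consistency-driven idea concrete, the sketch below shows a deliberately simplified, hypothetical variant of the support deformation step: the support is filled with a uniform tissue value, forward projected with scikit-image's parallel-beam radon transform, and the binary support is greedily dilated or eroded to reduce the mismatch with the measured rays. The paper's actual geometry, initialization, and deformation strategy are not reproduced here.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion
from skimage.transform import radon

TISSUE = 1.0  # assumed uniform CT value for the support fill (arbitrary units)

def inconsistency(support, measured_sino, theta, measured_mask):
    """Sum-of-squares mismatch between the forward projection of the
    tissue-filled support and the measured (in-SFOV) line integrals."""
    sino = radon(TISSUE * support.astype(float), theta=theta, circle=False)
    return np.sum((sino - measured_sino)[measured_mask] ** 2)

def refine_support(support, measured_sino, theta, measured_mask, n_iter=20):
    """Greedy support deformation: at each step keep whichever of
    {keep, dilate, erode} best matches the measured rays."""
    for _ in range(n_iter):
        candidates = [support, binary_dilation(support), binary_erosion(support)]
        costs = [inconsistency(c, measured_sino, theta, measured_mask)
                 for c in candidates]
        support = candidates[int(np.argmin(costs))]
    return support
```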
Model-based iterative reconstruction (MBIR) methods based on maximum a posteriori (MAP) estimation have been
recently introduced to multi-slice CT scanners. The model-based approach has shown promising image quality
improvement with reduced radiation dose compared to conventional FBP methods, but the associated high computation
cost limits its widespread use in clinical environments. Among the various choices of numerical algorithms to optimize
the MAP cost function, simultaneous update methods such as the conjugate gradient (CG) method offer a relatively
high level of parallelism that can take full advantage of a new generation of many-core computing hardware. With proper
preconditioning techniques, fast convergence speeds of CG algorithms have been demonstrated in 3D emission and 2D
transmission reconstruction. However, 3D transmission reconstruction using preconditioned conjugate gradient (PCG)
has not been reported. Additional challenges in applying PCG in 3D CT reconstruction include the large size of clinical
CT data, shift-variant and incomplete sampling, and complex regularization schemes to meet the diagnostic standard of
image quality. In this paper, we present a ramp-filter-based PCG algorithm for 3D CT MBIR. Convergence speeds of the
algorithm with and without the preconditioner are compared.
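For orientation, a generic matrix-free preconditioned conjugate gradient loop is sketched below; in the MBIR setting described above, apply_A would represent the shift-variant Hessian (forward and backprojection plus the regularizer's curvature) and apply_Minv the ramp-filter-based preconditioner, both of which are left abstract here as assumed callables.

```python
import numpy as np

def pcg(apply_A, b, apply_Minv, x0=None, n_iter=50, tol=1e-8):
    """Matrix-free preconditioned conjugate gradient for A x = b,
    with A symmetric positive definite and M^{-1} applied by apply_Minv."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy usage with a small SPD system and a Jacobi (diagonal) preconditioner:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(pcg(lambda v: A @ v, b, lambda r: r / np.diag(A)))
```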
KEYWORDS: Stray light, Convolution, Wavelet transforms, Data compression, Point spread functions, Quantization, Wavelets, Transform theory, Digital photography, Imaging systems
Space-varying convolution often arises in the modeling or restoration of images captured by optical imaging
systems. For example, in applications such as microscopy or photography the distortions introduced by lenses
typically vary across the field of view, so accurate restoration also requires the use of space-varying convolution.
While space-invariant convolution can be efficiently implemented with the Fast Fourier Transform (FFT),
space-varying convolution requires direct implementation of the convolution operation, which can be very computationally
expensive when the convolution kernel is large.
In this paper, we develop a general approach to the efficient implementation of space-varying convolution
through the use of matrix source coding techniques. This method can dramatically reduce computation by
approximately factoring the dense space-varying convolution operator into a product of sparse transforms. This
approach leads to a tradeoff between the accuracy and speed of the operation that is closely related to the
distortion-rate tradeoff that is commonly made in lossy source coding.
We apply our method to the problem of stray light reduction for digital photographs, where convolution
with a spatially varying stray light point spread function is required. The experimental results show that our
algorithm can achieve a dramatic reduction in computation while maintaining high accuracy.
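To illustrate the baseline cost that motivates this work (not the matrix source coding method itself), the snippet below implements direct space-varying convolution with a position-dependent Gaussian kernel; the O(N·K²) inner loop is exactly what the sparse factored approximation aims to avoid. The blur model and parameters are hypothetical.

```python
import numpy as np

def space_varying_blur(img, sigma_of_xy, radius=5):
    """Direct space-varying convolution: each output pixel is a weighted sum
    over a local window whose kernel depends on the pixel position.
    Cost is O(N * K^2) for N pixels and a (2*radius+1)^2 kernel."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.empty((h, w), dtype=float)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    for i in range(h):
        for j in range(w):
            sigma = sigma_of_xy(i, j)  # kernel width varies across the field of view
            k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
            k /= k.sum()
            out[i, j] = np.sum(k * pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1])
    return out

# Hypothetical example: blur grows from left (sharp) to right (strong stray light).
img = np.random.rand(64, 64)
blurred = space_varying_blur(img, lambda i, j: 0.5 + 3.0 * j / 64)
```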
Maximum a posteriori (MAP) estimation has been shown to be an effective method for reconstructing images
from optical diffusion tomography data. However, one disadvantage of MAP reconstruction is that it typically
requires the use of iterative methods, which are computationally intensive. In contrast, the direct reconstruction
of MAP images is possible when the forward model is linear (or linearized) and the noise and image prior are
assumed Gaussian. These non-iterative MAP reconstruction techniques only require the multiplication of an
inverse matrix by a data vector to compute the reconstruction, but they depend on a combination of lossy source
coding techniques and sparse matrix transforms to make the required matrix-vector product efficient in both
computation and memory.
In this paper, we show examples of how non-iterative MAP reconstruction methods can be used to dramatically
reduce computation and storage for MAP reconstruction. Simulations of fluorescence optical diffusion
tomography (FODT) measurements and corresponding reconstructions are used to demonstrate the potential
value of these techniques. Numerical examples show that non-iterative MAP reconstruction can substantially
reduce both storage and computation, as compared to traditional iterative reconstruction methods.
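For context, the closed-form linear-Gaussian MAP estimate that underlies the non-iterative approach is sketched below with dense NumPy matrices; the lossy source coding and sparse matrix transforms that make this practical for large FODT problems are omitted, and the dense inverse shown here would be far too costly at realistic sizes.

```python
import numpy as np

def linear_gaussian_map(A, y, noise_cov, prior_cov):
    """Closed-form MAP estimate for y = A x + n with Gaussian noise n ~ N(0, noise_cov)
    and a zero-mean Gaussian prior x ~ N(0, prior_cov):
        x_hat = (A^T W A + prior_cov^{-1})^{-1} A^T W y,  where W = noise_cov^{-1}.
    The matrix (A^T W A + prior_cov^{-1})^{-1} can be precomputed once, after which
    each reconstruction is a single matrix-vector product."""
    W = np.linalg.inv(noise_cov)
    H = A.T @ W @ A + np.linalg.inv(prior_cov)
    return np.linalg.solve(H, A.T @ W @ y)

# Toy usage: three measurements of a two-element image vector.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))
x_hat = linear_gaussian_map(A, A @ np.array([1.0, -0.5]), 0.01 * np.eye(3), np.eye(2))
print(x_hat)
```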