1. Introduction

Iterative image reconstruction (IIR) for x-ray computed tomography (CT) has been heavily investigated in recent years.1 It is well known that IIR involves a substantially greater computational effort than direct reconstruction algorithms such as filtered back-projection (FBP). For CT, this issue is particularly acute because it is a high-resolution imaging modality, requiring the use of small voxels to achieve the inherent resolution of the CT data.2 Because IIR is based on implicit estimation of the image, a complete representation of the subject within the scanner view domain is also necessary. Objects appearing in the x-ray projections that have no relevance to the imaging task, such as the patient couch, must also be accounted for in the reconstruction volume. Such a complete representation is only necessary for implicit methods and not for direct reconstruction algorithms such as FBP, where the inversion equation need only be evaluated at points within a region of interest (ROI). The combination of IIR requiring the representation of large volumes and the use of small voxels exacerbates the computational burden of CT image reconstruction. To address this problem, we propose a method for direct ROI IIR, which allows for the use of smaller reconstruction volumes focused on ROIs relevant to a given imaging task.

The main tool employed to realize ROI IIR appears in our conference record (CR) publication for the 2014 SPIE medical imaging conference.3 In the CR, we presented an offshoot of our adaptive steepest descent–projection onto convex sets algorithm for edge-enhanced image reconstruction applied to clinical digital breast tomosynthesis (DBT) projection data. To reach the desired DBT image quality goals, we proposed to compare the estimated and measured DBT projection data after convolving both with a derivative kernel. The resulting IIR algorithm was seen to behave analogously to a class of analytic image reconstruction methods called Λ-tomography, or local tomography.4–7 The resulting DBT images accentuated small structures relative to IIR employing the standard data fidelity term. Furthermore, the proposed derivative data fidelity yielded an IIR algorithm robust against heavily truncated projection data, allowing for direct image reconstruction of a DBT ROI. It is this latter feature of the modified data fidelity term that we exploit for ROI IIR in CT.

The algorithm developed in Ref. 3 was designed to obtain a useful image after only a few iterations, but it was not run to convergence of the optimization problem upon which the algorithm was based. As a result, the obtained images depend on the parameters characterizing the derivative filter in addition to other optimization and control parameters of the algorithm. In this article, we aim to characterize the solution of the proposed optimization problem, and accordingly we employ an accurate solver developed by Chambolle and Pock (CP).8,9 In this way, we narrow down the IIR parameter space to only those parameters needed to specify the optimization problem. With the understanding gained by thoroughly investigating the solution of the proposed optimization problem, the design of IIR with severely truncated iteration is facilitated.

In Sec. 2, we present the proposed optimization problem and the CP algorithm derived to solve it. In Sec. 3, we characterize the solution of the optimization problem with ideal and noisy simulated data for sparse-view and standard CT acquisition.
In Sec. 4, we focus on ROI image reconstruction, comparing the use of the proposed derivative weighted data fidelity term with that of a standard Euclidean data fidelity with no additional weighting. Finally, in Sec. 5, we demonstrate ROI reconstruction on actual CT scanner data.

2. Optimization for CT IIR Using a Derivative Weighted Data Fidelity Term

We introduce an optimization problem designed to allow for direct ROI image reconstruction in CT, and present the algorithm used to solve the optimization problem. The CT data model is presented in Sec. 2.1; the particular data fidelity term of interest is introduced in Sec. 2.2, where the link with Λ-tomography is also explained; incorporation of this fidelity into an optimization problem for CT IIR is discussed in Sec. 2.3; and finally, a pseudocode for the solver is given in Sec. 2.4.

2.1. Data Model

We employ the standard x-ray transform as the data model for developing CT image reconstruction algorithms,

  g(s₀, θ̂) = ∫₀^∞ f(s₀ + t θ̂) dt,  (1)

where the data function g is the line integral of the object function f along a ray originating at a source location s₀ and pointing in the direction θ̂. For x-ray transmission imaging, samples of this data function are usually estimated by taking the negative logarithm of the transmitted x-ray intensity divided by the incident x-ray intensity. Analytic image reconstruction algorithms such as FBP or fan-beam FBP are devised by seeking direct inversion equations for this model.

For IIR, the most common approach, and the approach taken here, is to represent f with a finite-size expansion set such as pixels and relate the expansion coefficients directly to the discrete samples of the x-ray projection data,

  g = X f,  (2)

where g is a vector denoting the projection data samples; X is the discrete form of the x-ray transform; and f is a vector denoting the expansion coefficients. For the present work, f is a vector of pixel coefficients and X is computed by the line-intersection method.

2.2. Derivative Weighted Data Fidelity and its Relation to Λ-Tomography

The majority of IIR algorithms that use Eq. (2) employ some form of optimization because the data contain physical factors that can make Eq. (2) inconsistent. This data model may also not specify a unique image if there is insufficient sampling. An example of a simple optimization problem for IIR is to minimize the square of the Euclidean distance between estimated and available projection data,

  f* = argmin_f (1/2) ‖X f − g‖₂².  (3)

Although there may be no solution to Eq. (2), there will always be a minimizer for Eq. (3). Furthermore, this optimization problem can easily be extended to incorporate prior knowledge on the image f, such as a smoothness penalty or physical constraints on pixel values.

In Ref. 3, we presented and motivated an alternative data fidelity function where the Euclidean distance is modified by performing numerical differentiation on the estimated and available projections,

  Δ_D(f) = (1/2) ‖D_u (X f − g)‖₂²,  (4)

where D_u is a matrix approximating the continuous derivative ∂/∂u of a data function on each projection. In two-dimensional (2-D) CT, the direction of this differentiation is unambiguous, up to a sign (the sign plays no role because of the norm in the data fidelity), and the coordinate u indicates position on the detector. In three-dimensional (3-D) CT, the coordinate u is taken to represent the location on the detector in the direction parallel with the x-ray source trajectory. The matrix D_u thus encapsulates the action of numerical differentiation in the u-direction.
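As a concrete illustration (not the paper's code), the following minimal Python sketch evaluates both the standard Euclidean term of Eq. (3) and the derivative weighted term of Eq. (4). A plain random matrix stands in for the line-intersection projector X, and D_u is approximated by a central difference along the detector coordinate of each projection row; all names are ours, for illustration only.

```python
import numpy as np

def euclidean_fidelity(X, f, g):
    """Standard quadratic data discrepancy: 0.5 * ||X f - g||_2^2."""
    r = X @ f - g
    return 0.5 * np.dot(r, r)

def derivative_fidelity(X, f, g, n_bins):
    """Derivative weighted discrepancy: 0.5 * ||D_u (X f - g)||_2^2.

    The residual sinogram is reshaped to (n_views, n_bins) and an
    antisymmetric central difference approximates d/du along each row.
    """
    r = (X @ f - g).reshape(-1, n_bins)
    d_r = 0.5 * (np.roll(r, -1, axis=1) - np.roll(r, 1, axis=1))
    d_r[:, [0, -1]] = 0.0  # ignore wrap-around at the detector edges
    return 0.5 * np.sum(d_r ** 2)

# Toy usage with a random stand-in for the system matrix X.
rng = np.random.default_rng(0)
n_views, n_bins, n_pix = 8, 16, 32
X = rng.random((n_views * n_bins, n_pix))
f = rng.random(n_pix)
g = X @ f + 0.01 * rng.standard_normal(n_views * n_bins)
print(euclidean_fidelity(X, f, g), derivative_fidelity(X, f, g, n_bins))
```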
As discussed in Ref. 3, the use of this data fidelity can lead to IIR algorithms that are robust against low-frequency data inconsistency, and it can facilitate ROI IIR.

In Ref. 3, we also consider a data fidelity modification that combines the derivative weighted and Euclidean terms in order to provide control over the absolute gray level of images obtained by use of the derivative weighting. We define

  Δ_c(f) = (1/2) ‖D_u (X f − g)‖₂² + (c²/2) ‖X f − g‖₂² = (1/2) ‖(D_u + cI)(X f − g)‖₂².  (5)

The second equality states that the sum of the quadratic terms in Δ_c is equivalent to modifying the derivative filter on the data by adding c times the identity matrix. The last form is convenient for developing the solver, and this equality only holds if D_u is antisymmetric. Thus, the particular form of D_u can vary, but we require it to be an antisymmetric matrix. The finite differencing weighting of D_u de-emphasizes the low-frequency components of the data, which, in turn, can impact the overall gray level of the reconstructed images. Adding back the standard Euclidean data fidelity with a small constant c addresses this issue.

The use of differentiation in the data fidelity term has a similar motivation as the local filtering employed in Λ-tomography, and a precise connection between the two approaches can be made. In Λ-tomography, see for example Ref. 7, the 2-D parallel-beam FBP algorithm is modified by replacing the ramp filter with filtering of the projection data with a second derivative,

  R† (−∂²/∂u²) R f = K f,  (6)

where R is the 2-D Radon transform (forward projection for 2-D parallel-beam CT) and R† is the adjoint operation, back-projection; the second derivative with respect to the detector coordinate u is performed on the continuous 2-D parallel-beam sinogram g(θ, u), with θ indicating view angle; and the operator K performs 2-D cone filtering on the continuous image function f(r). The 2-D cone filter takes the form

  (K f)̃(ν) ∝ |ν| f̃(ν),  (7)

where ν is the spatial frequency vector counterpart to the spatial vector r. The advantage of Λ-tomography is that the filter is local, and it can thus be performed on heavily truncated projection data. The price to be paid is that Kf is reconstructed instead of f. Yet, the image Kf can have utility in that the cone filtering yields an image with enhanced edges, highlighting the discontinuous boundary between different tissue types.

To see the connection between Λ-tomography and use of the data fidelity Δ_D, we consider the optimization problem

  f* = argmin_f (1/2) ‖D_u (X f − g)‖₂².  (8)

The solution set of this optimization problem can be identified by taking the gradient of the objective and setting it equal to zero,

  Xᵀ D_uᵀ D_u (X f − g) = −Xᵀ D_u² (X f − g) = 0,  (9)

where the antisymmetry of D_u is used to arrive at the second equality. If the scanning configuration for X is a 2-D parallel beam with complete angular coverage, the following associations can be made in the continuous limit: X → R, Xᵀ → R†, and D_u → ∂/∂u. In the limit of this special case, the normal equation in Eq. (9) becomes

  K f = R† (−∂²/∂u²) g.  (10)

As a result, for parallel-beam data the operations R† (−∂²/∂u²) on the projection data yield the cone-filtered image Kf, and the minimization of Δ_D can be interpreted as deconvolution of the cone-filtering. Another important implication of this link is that the first step of gradient descent on Δ_D is the Λ-tomography image, provided that gradient descent is initialized with the zero image. This fact can be important for interpreting the image properties of IIR algorithms exploiting Δ_D and run with a truncated iteration. A similar connection is available between Δ_c and modified Λ-tomography, see Eq. (13) of Ref. 7.
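The requirement that D_u be antisymmetric is easy to verify numerically. The sketch below, using an assumed periodic central-difference matrix as D_u, checks the identity underlying the second equality of Eq. (5), ‖(D_u + cI)x‖₂² = ‖D_u x‖₂² + c²‖x‖₂², which holds because the cross term xᵀD_u x vanishes for any antisymmetric D_u.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c = 64, 0.1

# Periodic central-difference matrix: antisymmetric by construction.
D = np.zeros((n, n))
idx = np.arange(n)
D[idx, (idx + 1) % n] = 0.5
D[idx, (idx - 1) % n] = -0.5
assert np.allclose(D, -D.T)  # antisymmetry

x = rng.standard_normal(n)
lhs = np.sum((D @ x + c * x) ** 2)
rhs = np.sum((D @ x) ** 2) + c**2 * np.sum(x**2)
print(np.isclose(lhs, rhs))  # True: the cross term x^T D x vanishes
```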
We remind the reader that Δ_c can be utilized for any CT configuration, even though the connection with Λ-tomography occurs under the special circumstances of parallel-ray imaging.

2.3. Optimization Problem for CT IIR

The optimization problem we investigate here is

  f* = argmin_f (1/2) ‖(D_u + cI)(X f − g)‖₂²  such that  ‖(|∇f|)‖₁ ≤ γ.  (11)

The operator ∇ is a matrix representing a finite differencing approximation to the image gradient. The number of pixels/voxels in f is taken to be N_pix, and the spatial gradient operator yields a vector-valued image, e.g., ∇f, of size 2N_pix or 3N_pix depending on whether the image is 2-D or 3-D, respectively. The vector symbol (an over-arrow, as in t→) is used to indicate a spatial vector-valued image, and the spatial magnitude operator, |·|, converts a spatial vector-valued image to a scalar image by taking the vector magnitude at each pixel/voxel. The quantity ∇f is the spatial gradient of f; |∇f| is the gradient magnitude image (GMI); and ‖(|∇f|)‖₁, the sum over the GMI, is the image total variation (TV). The optimization problem, Eq. (11), seeks the image with lowest combined derivative and Euclidean data discrepancy amongst all images with TV less than or equal to γ.

This particular optimization problem is one variation of many that combine some form of data fidelity with a term involving image TV. The TV term is useful for exploiting sparsity in the GMI for image reconstruction problems with limited data. We use it exactly for this reason; the direct ROI reconstruction problem is a form of limited-data acquisition, because this problem is equivalent to the projection truncation problem for IIR. The reason for this is that in the implicit solution of Eq. (2), only transmission rays that pass through the ROI are used. This contrasts with direct reconstruction through FBP, where all projection data are used in filtering before back-projecting to the ROI. The particular form of the gradient-sparsity-exploiting optimization problem, where the constraint is placed on the image TV, is particularly convenient because the data fidelity metric is actually the sum of two fidelity metrics. Furthermore, for computer simulation studies, as performed in this article, the phantom TV provides a good reference for selecting the parameter γ.

To completely specify Eq. (11) for the following CT simulations, we select a particular form of D_u. As required, D_u is chosen to be purely antisymmetric, and it represents convolution with a 21-point kernel. This kernel is generated by smoothing an antisymmetric differencing kernel with a Gaussian of standard deviation σ, measured in detector bin units. The smoothing of the finite differencing kernel is useful for controlling noise texture in the reconstructed images. The kernel is shown in Fig. 1 for different values of σ.
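The paper does not list the kernel values, so the following sketch shows one plausible construction of a 21-point antisymmetric differencing kernel smoothed by a Gaussian of standard deviation σ (in detector bin units); the exact construction used for Fig. 1 may differ.

```python
import numpy as np

def smoothed_diff_kernel(sigma, half_width=10):
    """Antisymmetric differencing kernel smoothed with a Gaussian.

    Returns a (2*half_width + 1)-point kernel (21 points by default).
    For sigma = 0 the kernel reduces to a plain central difference.
    """
    n = np.arange(-half_width, half_width + 1)
    if sigma == 0:
        k = np.zeros(len(n))
        k[half_width - 1], k[half_width + 1] = -0.5, 0.5
        return k
    gauss = np.exp(-0.5 * (n / sigma) ** 2)
    gauss /= gauss.sum()
    diff = np.zeros_like(gauss)
    diff[half_width - 1], diff[half_width + 1] = -0.5, 0.5
    k = np.convolve(diff, gauss, mode="same")
    return 0.5 * (k - k[::-1])  # enforce exact antisymmetry

for sigma in (0.0, 1.0, 2.0):
    k = smoothed_diff_kernel(sigma)
    assert np.allclose(k, -k[::-1])  # antisymmetric, as required
```

Convolution with such a kernel, applied along the detector coordinate with a symmetric choice of boundary handling, yields an antisymmetric matrix D_u as required by Eq. (5).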
Aside from the usual scanning geometry parameters, there are three parameters, γ, σ, and c, involved in the optimization problem of study, Eq. (11). The goal of this article is to demonstrate the effect of each of these parameters on the solution of Eq. (11) in order to gain intuition about this optimization problem for various CT imaging configurations, and in particular for direct ROI IIR. For real data applications, such as the DBT reconstructions presented in Ref. 3, an IIR algorithm with a truncated iteration may be used. In this case, we do not expect to achieve the solution to Eq. (11), and as a result the attained reconstructed volume depends on all the parameters of the IIR algorithm, further complicating the characterization of the algorithm. Thus, before a thorough investigation of truncated IIR motivated by Eq. (11), it is important to characterize the solution of this optimization problem. To this end, we employ the CP algorithm for its accurate solution.

2.4. CP Algorithm to Solve the Proposed Optimization Problem

We have found that the CP algorithm provides a useful framework for prototyping convex optimization problems, constrained or unconstrained, for image reconstruction in CT. For such problems, our experience indicates that an accurate solution can be obtained in hundreds to thousands of iterations. Although this is too many for practical IIR on realistic-size systems with current computational technology, the required number of iterations is acceptable for characterization studies. Nevertheless, we introduce a few slight modifications to Eq. (11) that we have found to improve the convergence rate and thereby facilitate theoretical investigation. In addition, the CP algorithm is primal–dual, and it solves a convex minimization problem simultaneously with its dual maximization. The details on how to derive the dual maximization and the corresponding CP algorithm instance for CT are presented in Ref. 9. Here, we simply state the primal and dual problems along with the CP pseudocode for their solution.

We modify Eq. (11) by introducing two constants, λ and ν:

  f* = argmin_f (λ/2) ‖(D_u + cI)(X f − g)‖₂²  such that  ν‖(|∇f|)‖₁ ≤ νγ.  (12)

Note that λ and ν do not affect the solution, but we have found in Ref. 10 that they can have a large impact on the algorithm convergence rate. The parameter ν is not varied. Instead, it is used to balance the magnitude of the linear operators F_c X and ∇, where F_c = D_u + cI, and we set

  ν = ‖F_c X‖₂ / ‖∇‖₂,  (13)

where the norm applied to a linear operator, also known as the spectral norm, is its largest singular value. This norm can be evaluated by the power method. Use of ν balances the two linear operators, which have different physical units. The parameter λ does not impact the solution of Eq. (12) because of the hard constraint, but it can have a strong impact on the CP algorithm convergence rate.

The dual maximization to Eq. (12) is

  max_{y, z→} { −(1/(2λ)) ‖y‖₂² − (F_c g)ᵀ y − νγ ‖(|z→|)‖_∞ }  such that  Xᵀ F_cᵀ y + ν ∇ᵀ z→ = 0,  (14)

where y is dual to a data vector and, therefore, has the same size as a sinogram; z→ is dual to an image gradient, having size 2N_pix or 3N_pix for 2-D or 3-D CT, respectively; and the ∞-norm yields the magnitude of the largest component of the vector argument. In implementing the back-projection, Xᵀ, care must be taken to ensure that its action is equivalent to that of the true matrix transpose of X. We note that the transpose of ∇ is a finite differencing approximation to the negative divergence of a vector-valued image.

The pseudocode for solving the primal, Eq. (12), and dual, Eq. (14), problems appears in Algorithm 1. This algorithm requires a projection onto an ℓ1 ball,11,12 which is given in Algorithm 2. Although this projection algorithm is described in detail in Ref. 11, we provide it here for completeness. Discussion on convergence criteria for the CP algorithm appears in Refs. 9, 10, and 12, but for the sake of brevity we do not go into detail on convergence here. For the presented results, we have performed the convergence checks and verified that the images are indeed accurate solutions to Eq. (11).
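The spectral norms entering Eq. (13) can be estimated with the power method mentioned above; a minimal, self-contained sketch (with a generic dense matrix standing in for the matrix-free operators F_c X and ∇) is given below.

```python
import numpy as np

def spectral_norm(A, n_iter=100, seed=0):
    """Largest singular value of A via the power method on A^T A."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    for _ in range(n_iter):
        x = A.T @ (A @ x)
        x /= np.linalg.norm(x)
    # For unit x near the top singular vector, ||A^T A x|| approximates
    # the largest singular value squared.
    return np.sqrt(np.linalg.norm(A.T @ (A @ x)))

# Check against numpy's SVD on a random matrix.
A = np.random.default_rng(1).standard_normal((50, 40))
print(spectral_norm(A), np.linalg.svd(A, compute_uv=False)[0])
```

In practice, X and ∇ are implemented matrix-free, so the matrix–vector products above would be replaced by projection/back-projection and gradient/negative-divergence calls.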
Algorithm 1: Pseudocode for N steps of the CP algorithm instance for solving Eqs. (12) and (14). For conciseness, we set the data filter F_c = D_u + cI. Note that the transpose of this filter is F_cᵀ = −D_u + cI. The smoothing parameter σ selects the specific form of D_u, see Fig. 1. The expression (F_c X, ν∇) is a linear operator combining the filtered projection with the gradient. This combined operator takes in an image vector and yields the concatenation of a data vector and a spatial vector-valued image.9 In Line 11, the ratio of image vectors is to be computed pixel-wise, and in the case of a zero pixel in the denominator, the ratio for that pixel is set to 1. In Line 12, a scalar-valued image is multiplied by a spatial vector-valued image. This operation is again performed pixel-wise, where the spatial vector at each pixel of t→ is scaled by the corresponding pixel value of r, yielding a spatial vector-valued image.
Algorithm 2: Pseudocode for the function ProjectOntoℓ1Ball_α(x), which projects x onto the ℓ1-ball of scale α. In terms of an optimization problem, this function takes in a vector x and returns x* = argmin_{x′} (1/2)‖x − x′‖₂², such that ‖x′‖₁ ≤ α. The vector x is taken to be one-dimensional with length N, and the individual components are labeled x_i, with the index i being an integer in the interval [1, N].
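Algorithm 2 follows the sorting-based method of Ref. 11; a compact Python version, together with the magnitude-image rescaling used in Lines 11 and 12 of Algorithm 1, might look as follows (variable names are ours).

```python
import numpy as np

def project_onto_l1_ball(x, alpha):
    """Euclidean projection of x onto {x' : ||x'||_1 <= alpha} (Ref. 11)."""
    if np.sum(np.abs(x)) <= alpha:
        return x.copy()
    u = np.sort(np.abs(x))[::-1]                      # sorted magnitudes
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - alpha) / np.arange(1, len(u) + 1) > 0)[0][-1]
    theta = (css[rho] - alpha) / (rho + 1.0)          # soft-threshold level
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def project_gradient_image(t, alpha):
    """Project a spatial vector-valued image t (shape (2, ny, nx)) so the
    l1 norm of its magnitude image is <= alpha, by projecting the magnitude
    image and rescaling each spatial vector (cf. Lines 11-12 of Algorithm 1).
    """
    mag = np.sqrt(np.sum(t**2, axis=0))               # gradient magnitude image
    s = project_onto_l1_ball(mag.ravel(), alpha).reshape(mag.shape)
    r = np.where(mag > 0, s / np.where(mag > 0, mag, 1.0), 1.0)  # 0/0 -> 1
    return r[None, ...] * t                           # pixel-wise rescale

# Quick check: the projected vector satisfies the constraint.
x = np.random.default_rng(2).standard_normal(1000)
assert np.sum(np.abs(project_onto_l1_ball(x, 5.0))) <= 5.0 + 1e-9
```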
3. Characterization with Simulated Untruncated Projection Data

The simulated 2-D CT data are based on the breast CT imaging application. The computer-generated phantom shown in Fig. 2 models fat and fibro-glandular tissues. The complex structure is obtained by generating an image with power-law filtered white noise, followed by thresholding. The structure model is described in Ref. 13. The scanning geometry is 2-D fan-beam CT with full circular coverage, as would be available in the midplane of an actual breast CT scan. The source-to-isocenter distance is 36 cm, and the source-to-detector distance is 72 cm. A linear detector with 1024 bins is modeled, and for the majority of the simulations, 256 projection views are taken. The exceptions are that 1024 views are used for the studies without TV regularization, and 64 views are employed in one study examining the possibility of sparse-view sampling. In this initial series of characterization studies, we examine image reconstruction from ideal noiseless data, where the projections are generated from a phantom defined on the same grid used for the image reconstruction. We also perform studies for inconsistent CT data, with noise modeling a typical breast CT scan and projection operating on mismatched grids: the phantom projections are generated on a finer grid than the 512 × 512 grid employed for the image estimate projections.

Comparison of the images presented in this work may not be straightforward because of the loss of absolute gray level information. For some series of images, in order to have objective image comparisons, we compute the mean and standard deviation of the pixel values within a specified region contained just inside the boundaries of the test phantom or the field-of-view (FOV), whichever is smaller. The gray scale range is then set to a symmetric window about this mean with half-width given by a fixed multiple of the standard deviation. It is stated explicitly in the figure captions when this method is used to compute the image gray scale.

To assist in the interpretation of the CT IIR results, we show FBP and Λ-tomography images in Fig. 3. The Λ-tomography image shows the expected enhancement at tissue borders and loss of the absolute gray level. Again, the reason Λ-tomography might be an interesting choice, in addition to the edge enhancement, is that the filter is local, allowing for projection truncation.

The first set of IIR images results from a CP algorithm designed to minimize the standard quadratic data fidelity of Eq. (3). The pseudocode appears in Algorithm 2 of Ref. 9. Intermediate images of this algorithm are shown in Fig. 4, where the 1024-view data are noiseless and perfectly consistent. The behavior of the progression of images, starting from a zero initial image, is generic to other first-order nonsequential algorithms applied to minimization of this fidelity. The image after the first iteration is blurry; in fact, by a similar argument to the one presented in Sec. 2.2, one can show that this image is approximately the result of back-projection applied to the projection data. As the iterations progress, the images become less blurry, and eventually the test phantom is obtained.

The next set of IIR images results from a CP algorithm designed to minimize the proposed data fidelity Δ_c with c = 0 and σ = 0. The same pseudocode from Algorithm 2 of Ref. 9 can be employed to minimize this data fidelity. Similar to gradient descent, one can show that the first iteration of this CP algorithm is proportional to a Λ-tomography image if a zero initial estimate is used.
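This proportionality can be checked directly: starting gradient descent on Δ_D from f = 0, the first step is a constant times Xᵀ D_uᵀ D_u g = Xᵀ(−D_u²)g, i.e., back-projection of second-derivative-filtered data, the discrete counterpart of the Λ-tomography operation R†(−∂²/∂u²)g in Eq. (10). A toy numerical confirmation, with random stand-ins for X and D_u:

```python
import numpy as np

rng = np.random.default_rng(6)
n_data, n_pix = 80, 40
X = rng.random((n_data, n_pix))

# Antisymmetric (periodic central-difference) filter on the data.
D = np.zeros((n_data, n_data))
idx = np.arange(n_data)
D[idx, (idx + 1) % n_data] = 0.5
D[idx, (idx - 1) % n_data] = -0.5

g = rng.random(n_data)

# Gradient of 0.5 * ||D (X f - g)||^2 at f = 0 is -X^T D^T D g, so the
# first descent step points along X^T D^T D g.
first_step = X.T @ D.T @ (D @ g)
lambda_image = X.T @ (-(D @ (D @ g)))   # back-projected -d^2/du^2 data
print(np.allclose(first_step, lambda_image))  # True, since D^T = -D
```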
Intermediate images of this algorithm are shown in Fig. 5, where the 1024-view data are noiseless and perfectly consistent. Indeed, we observe that the first panel shows an image proportional to that of Λ-tomography, and as the algorithm proceeds to convergence, the images tend toward the phantom. Minimization of both the Euclidean and derivative weighted fidelities results in accurate recovery of the test phantom, but the approach to this solution is quite different; the first iterate in minimizing Δ_c already has much of the tissue structure information relative to that of minimizing the Euclidean fidelity.

Although the image appearance of the early iterations in Fig. 5 has been explained, it is not obvious that the phantom should be recovered accurately at convergence, in light of the finite-differencing filter applied to both system matrix and data in Δ_c. In order to understand the phantom recovery, we apply singular value decomposition (SVD) in a manner similar to that of Ref. 14. We note that under ideal noiseless conditions, the minimizer of Δ_c with c = 0 and σ = 0 satisfies the following linear equation:

  D_u X f = D_u X f_phantom,

where f_phantom is the test phantom. If the combined system matrix D_u X is left-invertible, then this equation can be reduced to f = f_phantom, and the minimizer of Δ_c with c = 0 and σ = 0 is the test phantom f_phantom. The system matrix D_u X is left-invertible if its condition number is finite. We demonstrate this by performing SVD on smaller systems with the same sampling ratio, computing the corresponding condition numbers, and extrapolating to the system size of interest. The present sampling ratio takes the form of 2N projections onto a detector of 2N bins for an image represented on an N × N pixel grid, where N = 512. The largest and smallest singular values of D_u X for systems with the same sampling ratio are shown in Fig. 6 for N varying between 18 and 64. Assuming a linear relationship between the logarithm of these singular values and the logarithm of N, a condition number of 8.87 is estimated for the present system corresponding to N = 512. Thus, D_u X is left-invertible and the minimizer of Δ_c is f_phantom, thereby explaining the accurate phantom recovery shown in Fig. 5.
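The extrapolation itself is a simple log–log fit, illustrated in the sketch below; a random matrix with the 4:1 row-to-column sampling ratio stands in for D_u X (the actual study builds the matrix from the fan-beam geometry and derivative filter), and the sizes are kept small so the dense SVDs run quickly.

```python
import numpy as np

rng = np.random.default_rng(3)

def stand_in_system(n):
    """Random stand-in for D_u X: 2n x 2n data samples, n x n image,
    i.e., a (4*n*n, n*n) matrix. The real study uses the CT geometry."""
    return rng.standard_normal((4 * n * n, n * n))

sizes = np.array([8, 12, 16, 20, 24])
s_max, s_min = [], []
for n in sizes:
    s = np.linalg.svd(stand_in_system(n), compute_uv=False)
    s_max.append(s[0])
    s_min.append(s[-1])

# Fit log(singular value) versus log(N) and extrapolate to N = 512.
p_max = np.polyfit(np.log(sizes), np.log(s_max), 1)
p_min = np.polyfit(np.log(sizes), np.log(s_min), 1)
cond_512 = np.exp(np.polyval(p_max, np.log(512.0))
                  - np.polyval(p_min, np.log(512.0)))
print("extrapolated condition number at N = 512:", cond_512)
```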
It is also interesting to observe the evolution of the CP algorithm iterates when the sampling ratio is more challenging. Reducing the view number to 256 projections results in a system matrix D_u X with a much larger condition number. Intermediate images of the CP algorithm are shown in Fig. 7, where the 256-view data are noiseless and perfectly consistent. We again observe that the first panel shows an image proportional to that of Λ-tomography, and as the algorithm proceeds to convergence, the images seem to tend toward the phantom; however, the converged image shows noise-like texture resulting from the insufficient number of projections. Interestingly, the absolute gray level appears to be accurately recovered despite the poor overall conditioning of D_u X with 256 projections.

As discussed in Sec. 2.3, the view angle undersampling can be dealt with by exploiting GMI sparsity. To demonstrate this, we solve the optimization problem in Eq. (12) using the pseudocode in Algorithm 1 and again model noiseless, consistent 256-view projection data. Because the data are ideal, the parameters c and σ are again set to zero. The TV constraint parameter γ is selected to be that of the phantom, γ = γ_phantom. The resulting sequence of images is shown in Fig. 8, and they resemble those of Fig. 7 except that, at iteration 10 and after, the noise-like artifacts are removed by the additional steps constraining the image TV. Furthermore, at convergence, the phantom is recovered to high accuracy.

The accurate phantom recovery for this example, of course, depends on the fact that we know the correct TV constraint value γ_phantom, but the purpose here is to show that an ideal program can recover the phantom under ideal conditions. That it can do so is not obvious, because the data discrepancy objective is different from the standard Euclidean form. Although the Λ-tomography interpretation of the low-iteration images is useful, it is important to keep in mind that the specific evolution of the images depends on the values of all of the parameters in Algorithm 1 in addition to the parameters of the optimization problem, Eq. (12). The converged image, on the other hand, depends only on the parameters of the optimization problem.

In order to further test the robustness against view angle undersampling, we reduce the ideal data set to only 64 projections. The resulting image sequence appears in Fig. 9. We observe that Algorithm 1 can accurately recover the test phantom as the image estimates near convergence. The Λ-tomography interpretation of the low-iteration images, however, suffers due to the streaks from the view angle undersampling. Nevertheless, the insensitivity of the proposed data discrepancy term to absolute gray level does not appear to hinder the ability of Algorithm 1 to recover the phantom from sparse-view data.

Finally, we investigate the response of Algorithm 1 to inconsistent projection data. Two forms of inconsistency are introduced in the data model. First, there is a mismatch between the discrete phantom grid and the reconstructed image grid. Second, noise is introduced with a Poisson-like Gaussian distribution, i.e., the variance of the Gaussian is set equal to the mean, where the mean is the average transmission at each detector bin for a fixed number of photons incident on each bin prior to passing through the subject. Thus, we are modeling a low-dose CT scan, as is typically proposed for breast CT. Images corresponding to the accurate solution of Eq. (12), with the TV constraint parameter selected to be that of the phantom, γ = γ_phantom, and various values of the parameters c and σ, are displayed in Fig. 10. The parameter σ clearly impacts noise texture, and here, we only aim to show how image quality changes with this parameter. The optimal setting of σ can only be determined based on the particular imaging task of the CT scan. Although c has little impact on the reconstructed images for these simulations with untruncated projections and full image representation, this parameter helps to control the gray level for ROI imaging.
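A minimal sketch of this noise model, under the stated Gaussian approximation to Poisson counting statistics, is given below; the incident fluence N0 used in the call is a placeholder, since the value used for the study is not reproduced here.

```python
import numpy as np

def add_ct_noise(g_ideal, n_incident, rng):
    """Poisson-like Gaussian noise on line-integral data g_ideal.

    The mean transmitted count is N0 * exp(-g); the count variance is
    set equal to its mean, and the noisy counts are converted back to
    line-integral data by the negative logarithm.
    """
    mean_counts = n_incident * np.exp(-g_ideal)
    noisy = mean_counts + np.sqrt(mean_counts) * rng.standard_normal(g_ideal.shape)
    noisy = np.maximum(noisy, 1.0)  # guard the logarithm
    return -np.log(noisy / n_incident)

rng = np.random.default_rng(4)
g = np.abs(rng.standard_normal((256, 1024)))         # stand-in sinogram
g_noisy = add_ct_noise(g, n_incident=1e4, rng=rng)   # N0 = 1e4 is an assumption
```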
4. Characterization for CT ROI Imaging

In this section, the CT simulations aim at illustrating the utility of Algorithm 1 for ROI imaging, where an incomplete image representation is used that spans only a given ROI. This study is performed in two steps: first, using a full image representation, we reconstruct the FOV of a truncated projection acquisition; and second, using only image pixels in this same FOV, we perform image reconstruction. By the use of the term FOV, we specifically refer to the collection of all pixels that are illuminated by all views in the data set. We use this term instead of "interior tomography,"15 which has a specific meaning in 2-D image reconstruction, where the aim is to recover a continuous region interior to the support of the subject from projections illuminating only this interior region. Furthermore, for interior tomography, the data model is the continuous 2-D Radon or x-ray transform, while for the present FOV imaging, the data model is the discrete x-ray transform. We make the distinction between ROI and FOV in order to allow for the design of a scan where the FOV is larger than the ROI, in order to avoid potential artifacts at the FOV border.

First, we investigate the impact of truncated data while keeping the image representation complete, i.e., the image pixels cover the region where the object support is nonzero. The simulated projection data are ideal and noiseless, containing 256 projections. Each of the projections is truncated so that the FOV of the scan is a centered circle of diameter 9 cm, i.e., half the width of the image array. The left column of images in Fig. 11 corresponds to converged solutions of Eq. (12) with the derivative weighting D_u, and for comparison, the right column of images results from the same optimization problem without the proposed weighting. For this configuration, it is not clear what the optimal choice of the TV constraint parameter γ is, even though we know the test phantom. Using the TV of the phantom within the FOV, γ_FOV, and the TV of the entire phantom, γ_phantom, as lower and upper bounds on γ, respectively, we reconstruct images for four values of γ between these bounds, parametrized by the interpolation factor β:

  γ(β) = β γ_phantom + (1 − β) γ_FOV.

For β = 1, use of the weighting seems to make little difference; both images show cupping, although the weighting image has slightly less cupping than when no weighting is used. As β decreases, both sets of images show a decrease in resolution, particularly in going from β = 0.5 to 0.25; however, for the weighting images, edges within the FOV appear to remain sharp relative to the edges in the images generated without the proposed weighting. Moreover, the weighting images show markedly less cupping. A quantitative sense of these observations is obtained in the profile plots of Fig. 12.

Second, we investigate the impact of employing an incomplete image representation, where pixels cover only the region of the FOV from the previous study, a centered 9-cm diameter circular ROI. Note that in this case, it does not matter whether the projections are truncated to this 9-cm ROI or not. For untruncated projections, measurements corresponding to rays not intersecting the ROI end up not playing any role in the IIR algorithm. Converged images, with and without the weighting, obtained by Algorithm 1 appear in Fig. 13. It is in the use of the smaller ROI representation that we see a possible major advantage of the weighting. The weighting reduces the data discrepancy incurred by using too small an image representation. As a result, the TV constraint can be applied without destroying structure information in the ROI, while the use of the TV constraint without the weighting destroys this information. Inspecting the gray scale windows for both plots, we note that the automatically computed gray scale for the weighting image is substantially lower than the values of the actual phantom, while the image without weighting has approximately the correct absolute gray scale (aside from the fact that the structure information is lost). That the absolute gray scale of the weighting image is lost is the primary motivation for introducing the parameter c.
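The two TV reference values and the interpolation used in this section are straightforward to compute from the phantom; a sketch follows, with a hypothetical binary FOV mask and a forward-difference gradient standing in for the paper's discrete operators.

```python
import numpy as np

def total_variation(f, mask=None):
    """Sum of the gradient-magnitude image of f, optionally restricted
    to a binary region-of-interest mask (forward differences)."""
    gx = np.diff(f, axis=1, append=f[:, -1:])
    gy = np.diff(f, axis=0, append=f[-1:, :])
    gmi = np.sqrt(gx**2 + gy**2)
    return np.sum(gmi if mask is None else gmi[mask])

# Circular FOV of diameter half the array width, centered.
n = 512
yy, xx = np.mgrid[:n, :n]
fov = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 <= (n / 4) ** 2

phantom = np.random.default_rng(5).random((n, n))  # stand-in phantom
gamma_fov = total_variation(phantom, fov)    # lower bound on gamma
gamma_full = total_variation(phantom)        # upper bound on gamma

beta = 0.5  # interpolation factor from the text
gamma = beta * gamma_full + (1 - beta) * gamma_fov
```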
For the final simulations, we again employ the ROI-only representation, but model inconsistent projection data with grid mismatch and measurement noise at the same incident photon fluence per detector bin as before. Again, the data acquisition configuration is chosen so that the FOV coincides with the 9-cm diameter ROI. In the grid of images shown in Figs. 14 and 15, results are shown for various values of γ and c, while σ is fixed to one. The value of γ is parametrized in these figures with a multiplicative factor applied to γ_FOV, the TV of the phantom within the FOV. The same results are shown in both figures, because Fig. 14, with its wide, all-encompassing gray scale window, provides a sense of the gray level shifting, while Fig. 15 depicts contrast and noise texture more clearly.

The leftmost column of images in Fig. 15 shows results for c = 0. Each of these images shows the structural information of the phantom within the FOV, but the gray values are quite low and independent of γ. Within this column, the TV constraint is an effective control over image regularization, and an intermediate value of the constraint yields an image with almost no noise artifacts while suffering minimal blurring due to the regularization. As c increases, the gray levels of the reconstructed images also increase. The restoration of the gray levels, however, comes at the price of the TV constraint becoming less effective. Following a row of fixed γ, the increasing data discrepancy caused by increasing c results in image details being smoothed away. Inspecting the column of largest c, the image details can be recovered by loosening the TV constraint, but there is a clear loss of detail relative to the c = 0 images. Nevertheless, for some imaging applications, it may be important to employ the parameter c to recover absolute gray level information.

5. Application of ROI IIR to Severely Truncated Projections from Actual CT Scanner Data

To demonstrate application of the proposed algorithm to actual scanner data, we employ an XCounter CT scan of a rat. The data set contains 1000 projections acquired with a detector array whose detection elements are 0.1 mm in width. In order to test the ROI capability, we truncate the projection data to include only transmission rays passing through a cylindrical subvolume of radius 8 mm and height 18 mm. This volume contains part of the rat jaw and a forepaw. For reference, we employed FBP to reconstruct the complete untruncated data set, and display slices that traverse the chosen ROI in Fig. 16. Returning to the truncated data set, we apply FBP, Λ-tomography, and the proposed ROI IIR algorithm for two different values of the TV constraint γ, expressed in terms of the average gradient magnitude per voxel. The resulting images are shown in Fig. 17. The FBP images show the usual cupping in the FOV of the scan, and it is difficult to see structure near or beyond the FOV. The images generated by Λ-tomography do not show a discontinuity at the FOV border, and the edges of structures are clearly seen, but the gray levels are, as expected, lost. The ROI IIR algorithm appears to retain some gray level information without suffering from strong discontinuities at the FOV border. The difference in behavior near the FOV border is most striking in the images of the middle column of Fig. 17, which show the rat forepaw. We point out that this study on CT scanner data is only a preliminary demonstration, and we did not attempt to optimize image quality over the parameters γ, σ, and c. Likewise, the implementations of FBP and Λ-tomography are basic, because these images serve only as references. For the FBP images, we did not explore alternate filter designs nor attempt to counteract the cupping artifact by postprocessing or extrapolating projections.
6. Discussion

We have presented an IIR algorithm based on an alternative data fidelity term that is designed to facilitate ROI reconstruction. The use of the restricted image representation can allow for IIR with a high-resolution image representation without incurring prohibitively high computational and memory demands. Furthermore, the use of the proposed filtering on the data discrepancy preserves edges, allowing for a high degree of regularization before structures in the subject are blurred away. The present investigation mainly focuses on the image properties of the converged solution of TV-constrained data discrepancy minimization, where the images depend on three parameters: γ controlling regularization, σ affecting noise texture, and c allowing for gray level recovery. It may be that for practical application of the proposed data fidelity term, IIR algorithms operate with truncated iteration. Characterizing such algorithms becomes more complicated, as other algorithm parameters in addition to the three of the optimization problem affect the reconstructed image when convergence of the corresponding optimization problem is not achieved. The established link with Λ-tomography, however, can aid in the design of IIR using the proposed derivative weighting and operating with truncated iteration.

Acknowledgments

D. N. Kraemer and E. G. Roth were supported by the NSF MedIX REU Program Award No. 1359459. This work was supported in part by NIH R01 Grants Nos. CA158446, CA182264, EB018102, and EB000225. The contents of this article are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health.

References
1. J. Nuyts et al., "Modelling the physics in the iterative reconstruction for transmission computed tomography," Phys. Med. Biol. 58(12), R63–R96 (2013). http://dx.doi.org/10.1088/0031-9155/58/12/R63
2. E. Y. Sidky et al., "A constrained, total-variation minimization algorithm for low-intensity X-ray CT," Med. Phys. 38(3), S117–S125 (2011). http://dx.doi.org/10.1118/1.3560887
3. E. Y. Sidky et al., "Enhancing tissue structures with iterative image reconstruction for digital breast tomosynthesis," Proc. SPIE 9033, 90330W (2014). http://dx.doi.org/10.1117/12.2043776
4. A. G. Ramm and A. I. Katsevich, The Radon Transform and Local Tomography, CRC Press, Boca Raton, Florida (1996).
5. A. I. Katsevich, "Cone beam local tomography," SIAM J. Appl. Math. 59(6), 2224–2246 (1999). http://dx.doi.org/10.1137/S0036139998336043
6. M. A. Anastasio et al., "Local cone-beam tomography image reconstruction on chords," J. Opt. Soc. Am. A 24(6), 1569–1579 (2007). http://dx.doi.org/10.1364/JOSAA.24.001569
7. J. Frikel and E. T. Quinto, "Characterization and reduction of artifacts in limited angle tomography," Inverse Probl. 29(12), 125007 (2013). http://dx.doi.org/10.1088/0266-5611/29/12/125007
8. T. Pock and A. Chambolle, "Diagonal preconditioning for first order primal-dual algorithms in convex optimization," in Int. Conf. on Computer Vision (ICCV 2011), 1762–1769 (2011).
9. E. Y. Sidky, J. H. Jørgensen, and X. Pan, "Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm," Phys. Med. Biol. 57(10), 3065–3091 (2012). http://dx.doi.org/10.1088/0031-9155/57/10/3065
10. E. Y. Sidky et al., "Constrained minimization for enhanced exploitation of gradient sparsity: application to CT image reconstruction," IEEE J. Transl. Eng. Health Med. 2, 1800418 (2014). http://dx.doi.org/10.1109/JTEHM.2014.2300862
11. J. Duchi et al., "Efficient projections onto the ℓ1-ball for learning in high dimensions," in Proc. of the 25th Int. Conf. on Machine Learning, 272–279 (2008).
12. E. Y. Sidky, J. S. Jørgensen, and X. Pan, "First-order convex feasibility algorithms for x-ray CT," Med. Phys. 40(3), 031115 (2013). http://dx.doi.org/10.1118/1.4790698
13. I. Reiser and R. M. Nishikawa, "Task-based assessment of breast tomosynthesis: effect of acquisition parameters and quantum noise," Med. Phys. 37(4), 1591–1600 (2010). http://dx.doi.org/10.1118/1.3357288
14. J. S. Jørgensen, E. Y. Sidky, and X. Pan, "Quantifying admissible undersampling for sparsity-exploiting iterative image reconstruction in x-ray CT," IEEE Trans. Med. Imaging 32(2), 460–473 (2013). http://dx.doi.org/10.1109/TMI.2012.2230185
15. F. Natterer, The Mathematics of Computerized Tomography, John Wiley & Sons, New York (1986).
Biography

Emil Y. Sidky received the BS degree in physics and mathematics from the University of Wisconsin, Madison, in 1987 and the PhD degree in physics from the University of Chicago in 1993. He held academic positions in physics at the University of Copenhagen and Kansas State University. He joined the University of Chicago in 2001, where he is currently a research associate professor. His current interests are in CT image reconstruction and large-scale optimization.

David N. Kraemer is currently pursuing an undergraduate degree in mathematics and computer science at Grinnell College, which he began last fall. He expects to graduate in 2017. His current work in medical physics has come through an REU through DePaul University and the University of Chicago, funded by the NSF.

Erin G. Roth is completing her undergraduate degree in physics at Carleton College. During the summer of 2013, she worked as a NASA Illinois Space Grant student studying black holes at Northwestern University. At Carleton College, she has spent time researching the basic behavior of pulsars. This summer, as part of the DePaul MedIX REU program, she is working in the field of medical imaging at the University of Chicago.

Christer Ullberg has been with XCounter since 1997 and prior to that has more than 10 years of professional experience in project management. Previously, he was responsible for all electronics design for space applications at ACR Electronic AB and worked as project manager of the multinational scientific balloon project PIROG. He is a world expert on CdTe photon counting and dual-energy technology.

Ingrid S. Reiser is an assistant professor of radiology at the University of Chicago in Chicago, Illinois. Her research interests include computer-aided detection and diagnosis methods for breast cancer in dedicated breast CT and digital breast tomosynthesis, as well as objective assessment of x-ray tomographic breast imaging systems.

Xiaochuan Pan received his BS degree from Beijing University, Beijing, China, in 1982, his MS degree from the Institute of Physics, Chinese Academy of Sciences, Beijing, in 1986, and the master's and PhD degrees from the University of Chicago in 1988 and 1991, respectively, all in physics. He is currently a professor with the Department of Radiology, University of Chicago. His research interests include physics, algorithms, and applications of tomographic imaging.