KEYWORDS: Modulation transfer functions, CT reconstruction, Computed tomography, Image quality, Computer simulations, Data acquisition, Deep learning, Sensors, Medical image reconstruction
As deep learning reconstruction (DLR) begins to dominate computed tomography (CT) reconstruction, performance evaluation via conventional phantoms with uniform backgrounds and fixed sizes may benefit from augmentation with simulated, controlled test objects inserted into anatomical backgrounds. The purpose of this study is to validate a simulation tool with physics-based image quality metrics in both phantom and patient data. An analytic forward projection tool, based on detector and source geometry with beam spectra, was designed to match the specifications of Canon Medical's Aquilion ONE Prism. The Catphan™ 500 and two water phantoms, 24 cm and 32 cm in diameter, were scanned on the Aquilion ONE Prism at various mA levels and reconstructed with FBP. Corresponding simulated images were generated. The CT number, noise power spectrum (NPS), and modulation transfer function (MTF) were evaluated and compared between the simulated and acquired images. Simulated projection data of a Catphan™ sensitometry cylinder were also combined with a patient sinogram and reconstructed with a variety of kernels. The MTFs of three different contrast rods were measured and compared with the MTF measured from the Catphan™. The CT numbers were equivalent between the simulated data and the images acquired from the actual CT system. The MTF measured from the simulated data, in both the phantom and the patient image, matched the MTF from the Catphan™. The noise properties of the simulated data also aligned with the NPS of the 24 cm and 32 cm water phantom images. The simulation tool generated images with image quality equivalent to that of images scanned and reconstructed on the actual CT system. With this validation study complete, the simulation tool will be used to further evaluate the performance of deep learning reconstructions (DLRs).
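The validation relies on standard image-quality metrics (CT number, NPS, MTF). As a point of reference only, the following is a minimal sketch of a radially averaged NPS estimate from uniform water-phantom ROIs; the function name, mean-subtraction detrending, and binning choices are illustrative assumptions and not the authors' implementation.

```python
import numpy as np

def radial_nps(rois, pixel_size_mm):
    """Estimate a 1-D radially averaged noise power spectrum (NPS)
    from a stack of uniform-phantom ROIs (shape: n_rois x N x N).
    Sketch only; conventions (detrending, normalization) vary by lab."""
    n_rois, ny, nx = rois.shape
    nps_2d = np.zeros((ny, nx))
    for roi in rois:
        # Remove the DC/low-frequency background (a 2-D polynomial fit is
        # also common); what remains is the noise realization.
        detrended = roi - roi.mean()
        nps_2d += np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2
    # NPS(fx, fy) = (dx * dy / (Nx * Ny)) * <|FFT of detrended ROI|^2>
    nps_2d *= (pixel_size_mm ** 2) / (n_rois * nx * ny)

    # Radially average the 2-D NPS to obtain NPS(f).
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_size_mm))
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_size_mm))
    fxx, fyy = np.meshgrid(fx, fy)
    fr = np.hypot(fxx, fyy)
    bins = np.linspace(0.0, fr.max(), nx // 2)
    idx = np.digitize(fr.ravel(), bins)
    nps_flat = nps_2d.ravel()
    nps_1d = np.array([nps_flat[idx == i].mean() if np.any(idx == i) else 0.0
                       for i in range(1, len(bins))])
    return bins[1:], nps_1d

# Example with synthetic white noise: 100 ROIs of 64x64 pixels, 0.5 mm pixels.
rois = np.random.normal(0.0, 10.0, size=(100, 64, 64))
freq, nps = radial_nps(rois, pixel_size_mm=0.5)
```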
In conventional CT, it is difficult to generate consistent organ-specific noise and resolution with a single reconstruction kernel. Therefore, it is necessary in principle to reconstruct a single scan multiple times using different kernels in order to obtain diagnostic information for different anatomies. In this paper, we provide a deep learning solution that achieves an organ-specific noise and resolution balance with a single reconstruction. We propose image reconstruction using a deep convolutional neural network (DCNN) trained with a feature-aware reconstruction target, which integrates desirable features from multiple reconstructions, each of which provides an optimal noise and resolution tradeoff for one specific anatomy. The performance of the proposed method has been verified with actual clinical data. The results show that our method can outperform standard model-based iterative reconstruction (MBIR) by offering consistent noise and resolution properties across different organs using only a single image reconstruction.
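The abstract describes a training target that blends multiple organ-optimized reconstructions. The sketch below shows one plausible way such a composite target could be assembled from per-organ segmentation masks and kernel-specific reconstructions; the names, the dictionary interface, and the soft-mask blending are hypothetical assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def composite_target(recons, masks, default_key="soft_tissue"):
    """Assemble a feature-aware training target by combining organ-specific
    reconstructions (e.g., a sharp lung kernel and a smooth soft-tissue kernel)
    according to per-organ masks. Hypothetical sketch.

    recons : dict mapping organ name -> reconstruction (H x W array)
    masks  : dict mapping organ name -> soft mask in [0, 1] (H x W array)
    """
    target = recons[default_key].copy()
    for organ, mask in masks.items():
        if organ == default_key:
            continue
        # Blend the organ-specific reconstruction into the running target;
        # soft (non-binary) masks reduce seams at organ boundaries.
        target = mask * recons[organ] + (1.0 - mask) * target
    return target

# Example: a lung-kernel image pasted into a soft-tissue-kernel background.
h, w = 512, 512
recons = {"soft_tissue": np.zeros((h, w)), "lung": np.ones((h, w))}
masks = {"lung": np.clip(np.random.rand(h, w), 0.0, 1.0)}
target = composite_target(recons, masks)
```

A DCNN could then be trained on pairs of a single-kernel input reconstruction and this composite target with an ordinary pixel-wise loss, so that one inference pass yields organ-appropriate noise and resolution everywhere in the image.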