1. Introduction

Photoacoustic tomography (PAT), an emerging biomedical imaging technique, performs deep tissue imaging with very high optical absorption sensitivity.1 Generally, biological tissue highly scatters light. Multiple scattering events cause photons to deviate from their original propagation direction, which impedes high-resolution optical imaging in deep tissue. By converting highly scattered photons into ultrasonic waves, which are about three orders of magnitude less scattered than light, PAT can break the optical diffusion limit (~1 mm deep in biological tissue) and form high-resolution images of the object's optical properties at depths.2 PAT has two major incarnations: focused-scanning-based photoacoustic microscopy (PAM) and reconstruction-based photoacoustic computed tomography (PACT).3

Transducer arrays with multiple elements are widely used for PACT, greatly improving the imaging speed over that achieved by scanning a single-element transducer.4–6 Although linear arrays are widely used because of their convenient hand-held operation,7–10 they provide only a limited acoustic view,11 which reduces the image fidelity of PACT.12,13 In contrast, curved transducer arrays, especially full-ring transducer arrays, have much broader acoustic detection coverage. Because full-ring transducer array-based PACT (FR-PACT) has full-view in-plane coverage, it eliminates the limited-view problem in the imaging plane and provides small-animal brain and body images with detailed structures and two-dimensional (2-D) full-view fidelity.14–16

However, all transducer elements have limited bandwidth with low-frequency cutoffs.17 In addition, although the full-ring geometry provides full in-plane detection coverage, the elevational acceptance is relatively small, being determined by the acoustic aperture of each element [characterized by its acoustic numerical aperture (NA)]. Consequently, the reconstructed images present bipolar (i.e., both positive and negative) pixel values. Ideally, however, PAT should image the optical energy deposition, which is nonnegative; thus, bipolar values are artificial and cause ambiguities in interpreting images. For example, both positive and negative peaks indicate high optical absorption, which is counter-intuitive for biologists and physicians seeking to understand the image. Moreover, bipolar pixel values pose difficulties in quantifying physiological parameters, such as mapping the distribution of blood oxygen saturation (sO2) and the metabolic rate of oxygen (MRO2). To mitigate the bipolarity, multiple solutions have been reported. One solution is to keep only the positive values and threshold the negative values to zero, but this removes useful structures and induces artifacts.18–20 A second solution is to deconvolve the raw channel data with the corresponding transducer element's electrical impulse response to retrieve the broadband photoacoustic signals. However, in addition to relying on a high signal-to-noise ratio (SNR), deconvolution does not solve the limited elevational acoustic coverage issue and, thus, cannot provide unipolar reconstructed images for FR-PACT. A third solution is to employ iteration-based image reconstruction with a nonnegativity constraint,21–23 although this requires accurate modeling of the imaging system and time-consuming computation. In addition to these three methods, Hilbert transformation is widely used in PAM and linear-array-based PACT to address bipolarity and extract envelope information,12,24–26 and it has proven to be simple and computationally efficient.
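As background for the envelope extraction used throughout this paper, the short sketch below is our own illustration rather than code from the reported system; it assumes a synthetic Gaussian-windowed tone burst as a stand-in for a measured A-line and uses NumPy/SciPy. It shows how a Hilbert transformation along the time axis converts a bipolar signal into a nonnegative envelope.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic stand-in for a band-limited, bipolar photoacoustic A-line:
# a Gaussian-windowed 5-MHz tone burst sampled at 40 MHz.
fs = 40e6                                   # sampling rate (Hz)
t = np.arange(0, 4e-6, 1 / fs)              # 4-us record
f0 = 5e6                                    # assumed transducer central frequency (Hz)
a_line = np.exp(-((t - 2e-6) ** 2) / (2 * (0.2e-6) ** 2)) * np.cos(2 * np.pi * f0 * t)

# The Hilbert transformation along the time (acoustic receiving) axis yields the
# analytic signal; its magnitude is the nonnegative envelope.
envelope = np.abs(hilbert(a_line))

print(a_line.min() < 0)      # True: the raw A-line is bipolar
print(envelope.min() >= 0)   # True: the envelope is unipolar
```

The same principle underlies the envelope-based corrections discussed below; the open question for FR-PACT is along which direction of a 2-D image the transformation should be taken.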
When applying Hilbert transformation to a 2-D matrix, it is critically important to select the correct transformation direction,27 normally along the acoustic propagation direction for photoacoustic (PA) image processing; otherwise, unpredictable artifacts may result. In focused-transducer-based PAM, the Hilbert transformation is usually taken along the A-line direction (the acoustic receiving direction), and the absolute value of the transformed A-line is then taken to produce its envelope.26,28,29 In linear-array-based PACT, a common choice for the Hilbert transformation direction is the array receiving direction (the depth direction) in the reconstructed images; taking the absolute value then extracts the envelope information of the images.12,30 However, in FR-PACT, the acoustic signals are received from all in-plane directions, covering 360 deg, which makes the determination of the transformation direction difficult. To address this issue, we explored multiple ways of implementing Hilbert transformation in FR-PACT (see the Appendix). Finally, we propose a multiview Hilbert transformation (MVHT) that satisfactorily corrects the bipolarity in FR-PACT with minimal artifacts in the reconstructed images and maximal image contrast.

2. Materials and Methods

Here, we present an MVHT algorithm for FR-PACT that converts the reconstructed bipolar images to unipolar images representing the initial pressures. Figure 1(a) shows the setup of the FR-PACT system for small-animal whole-body imaging, which has been reported earlier.4,31–33 A ring-shaped laser beam (750-nm wavelength, 10-Hz pulse repetition rate) is used for whole-body imaging illumination. The maximum light fluence on the skin of the animal is well below the American National Standards Institute safety limit. The PA signals are detected by a full-ring transducer array (Imasonic; 5-cm diameter, 512 elements, 5-MHz central frequency). Each element (10-mm height, 0.3-mm pitch, and 0.1-mm interelement spacing) is cylindrically focused to produce an elevational focal distance of 19.8 mm (acoustic NA, 0.25). The data acquisition system has 64 channels with eightfold multiplexing.

Previously, several inversion methods have been proposed to reconstruct images for the full-ring geometry.34–36 Figure 1(b) shows representative bipolar images acquired by FR-PACT using the conventional universal back-projection (UBP) reconstruction.34 A schematic of the image reconstruction is shown in Fig. 1(c), illustrating the process of implementing MVHT in FR-PACT. We first select a group of neighboring elements at one view angle $\theta_i$, all of which can be approximately regarded as sharing the same acoustic receiving direction because of the small angular coverage of those elements. The channel data from the selected elements are then used for reconstruction. Elements on the opposite side of the ring, sharing the same acoustic receiving axis, are also included because they contribute constructively to the reconstruction. Two coordinate systems, sharing the same origin at the center of the ring, are used here: the local coordinates $(x', y')$, attached to the locally selected elements and rotating with the view angle $\theta_i$, and the global coordinates $(x, y)$, attached to the full-ring array. The raw channel data from the selected neighboring elements are used for local image reconstruction in the local coordinates, using the UBP algorithm

$$
p_0^{(i)}(\vec{r}\,') = \sum_{j} \frac{\Delta\Omega_j}{\Omega_0}\left[2\,p(\vec{r}_j,t) - 2t\,\frac{\partial p(\vec{r}_j,t)}{\partial t}\right]_{t=|\vec{r}\,'-\vec{r}_j|/c}, \qquad (1)
$$

where $p(\vec{r}_j,t)$ is the acoustic pressure detected by the selected element located at $\vec{r}_j$ at time $t$, $c$ is the speed of sound in tissue, $\Delta\Omega_j$ is the solid angle subtended by the element at $\vec{r}_j$ with respect to the reconstruction point at $\vec{r}\,'$, $\Omega_0$ is the total solid angle subtended by the detection aperture of the selected elements with respect to the point at $\vec{r}\,'$, and $\Delta\Omega_j/\Omega_0$ is a weighting factor quantifying the contribution from the element located at $\vec{r}_j$. The locally reconstructed initial PA pressure $p_0^{(i)}(\vec{r}\,')$ is bipolar because of the limited detection bandwidth and acoustic receiving aperture.
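To make the single-view reconstruction step concrete, the following minimal sketch is our own illustration and not code from the reported system: the function name and the uniform weighting that stands in for $\Delta\Omega_j/\Omega_0$ are assumptions, and the UBP integrand $2p - 2t\,\partial p/\partial t$ is assumed to have been precomputed from the raw channel data. It back-projects one view's data onto a local image grid.

```python
import numpy as np

def backproject_single_view(channel_data, elem_pos, grid_x, grid_y, fs, c=1500.0):
    """Delay-and-sum back-projection of one view's data onto a local 2-D grid.

    channel_data : (n_elem, n_samples) array of back-projection terms b_j(t),
                   e.g., 2*p - 2*t*dp/dt precomputed from the raw signals.
    elem_pos     : (n_elem, 2) element coordinates (meters, local frame).
    grid_x, grid_y : 1-D pixel coordinates (meters); fs : sampling rate (Hz);
    c : speed of sound (m/s). Signals are assumed to start at t = 0 (laser firing).
    """
    n_elem, n_samp = channel_data.shape
    X, Y = np.meshgrid(grid_x, grid_y)
    image = np.zeros_like(X, dtype=float)
    for j in range(n_elem):
        # Time of flight from every pixel to element j, converted to sample indices.
        dist = np.hypot(X - elem_pos[j, 0], Y - elem_pos[j, 1])
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samp - 1)
        image += channel_data[j, idx]   # equal weights stand in for dOmega_j/Omega_0
    return image / n_elem
```

In practice, the solid-angle weights and the element directivity would replace the equal weights used here.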
Next, we take the Hilbert transformation of the reconstructed image in the local coordinates along the acoustic receiving direction [as indicated by the red arrows in Fig. 3(c)] and then take the absolute value of the transformed image. Rotating the processed image by the angle $\theta_i$ transfers it to the global coordinates. By repeating the above procedures for all view angles and pixel-wise averaging over all the images, the unipolar full-view image is obtained. The whole process can be expressed mathematically as

$$
P_0(\vec{r}) = \frac{1}{N}\sum_{i=1}^{N} R_{\theta_i}\left\{A\left[H\left(p_0^{(i)}(\vec{r}\,')\right)\right]\right\}, \qquad (2)
$$

where $P_0(\vec{r})$ is the final full-view unipolar image with nonnegative pixel values, $N$ is the total number of view angles, $H$ is the Hilbert transformation operator, which takes the Hilbert transformation of the partially reconstructed image along the acoustic receiving direction, and $A$ is the absolute-value operator, which takes the absolute value of the Hilbert-transformed image. The combination of the $H$ and $A$ operators extracts the envelope image from the partially reconstructed image. $R_{\theta_i}$ is the rotation operator, which rotates the envelope image by the angle $\theta_i$ to align it in the global coordinates. The final image is the average of all aligned envelope images.

An important parameter of this method is the number of elements to be bundled into one single-view group. Interestingly, we found that the optimal number of elements at each view is related to the reconstructed field of view (FOV). In our experiment, the diameter of the full-ring array is 50 mm. When the FOV is 16 mm in diameter, the corresponding angle subtended by the FOV [Fig. 1(c)] is about 37 deg. Thus, the number of elements that fall within the range of the red arc (one side) should be 54, given that the total number of elements of the full-ring array is 512. We rotated through 12 views ($N = 12$) with a step size of 15 deg to complete the full-view reconstruction.

To validate our prediction, we conducted a numerical simulation using a leaf skeleton as the object, as shown in Fig. 2(a). The conventional UBP reconstructed image with bipolar pixel values is shown in Fig. 2(b). The forward data of 2-D wave propagation were generated using the k-Wave toolbox.37 In the simulation, the central frequency of the simulated detector was set to 5 MHz, with 100% bandwidth. We employed 512 ideal point detectors arranged in a ring with a radius of 2.5 cm. As shown in Figs. 2(c) and 2(d), the reconstructed image using 54 elements at each view has a higher signal amplitude and better contrast [Fig. 2(e)] than the reconstructed image using 27 elements at each view. However, when the number of elements at each view is further increased to 108 or 216, artifacts appear in the reconstructed images, as shown in Fig. 3. When the number of elements at each view exceeds the optimal number, the acoustic receiving angle becomes so large that the direction of the Hilbert transformation is no longer aligned with the signal-receiving direction. We also note that the reconstructed images [Figs. 3(b)–3(d)] have similar signal amplitudes, which means that increasing the number of elements beyond 54 does not provide a higher SNR. Therefore, given an FOV of 16 mm in diameter, the optimal number of elements at each view should be 54, as predicted above.
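The element-count rule and the envelope–rotate–average combination of Eq. (2) can be summarized in a short sketch. This is again our own illustration: the function names, the bilinear interpolation used for rotation, and the sign convention of the rotation angle are assumptions, and the single-view reconstructions are taken as given.

```python
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import rotate

def optimal_elements_per_view(fov_diameter, ring_diameter, n_elements=512):
    # Angle subtended by the FOV as seen from the ring: 2*arcsin(r_fov / r_ring).
    # For a 16-mm FOV on a 50-mm ring this is ~37 deg, i.e., roughly 53 to 54
    # of the 512 elements on one side.
    angle = 2 * np.arcsin(fov_diameter / ring_diameter)
    return int(round(n_elements * angle / (2 * np.pi)))

def mvht_combine(local_images, view_angles_deg, receive_axis=0):
    """Combine single-view bipolar reconstructions into one unipolar image.

    local_images    : list of 2-D arrays, each reconstructed in its own local
                      frame with the acoustic receiving direction along
                      `receive_axis`.
    view_angles_deg : rotation that aligns each local frame with the global frame.
    """
    aligned = []
    for img, theta in zip(local_images, view_angles_deg):
        envelope = np.abs(hilbert(img, axis=receive_axis))   # H, then |.|
        aligned.append(rotate(envelope, theta, reshape=False, order=1))
    return np.mean(aligned, axis=0)                          # pixel-wise average
```

For the parameters above, optimal_elements_per_view(16, 50) returns 53, consistent with the roughly 54 elements per side used in the experiment.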
3. Results

We first quantified the in-plane image resolution of the FR-PACT system using MVHT for reconstruction. Microspheres embedded in agar gel (3% mass concentration, dissolved in deionized water) were imaged by FR-PACT. The MVHT reconstructed image of one microsphere is shown in Fig. 4(a), and the full width at half maximum (FWHM) after Gaussian fitting was calculated to be about 148 μm [Fig. 4(b)]. The in-plane resolution of the UBP reconstructed bipolar images was quantified in the same way [Figs. 4(c)–4(e)]. The in-plane resolution of the MVHT reconstructed unipolar images is slightly worse than that of the bipolar images because the envelope extraction acts as a low-pass filter in the spatial frequency domain.

To further demonstrate the effectiveness of the MVHT method, we imaged both a mouse brain ex vivo and a mouse trunk in vivo. All experimental procedures were carried out in conformity with laboratory animal protocols approved by the Animal Studies Committee of Washington University in St. Louis. A saline-perfused mouse brain was first embedded in agar gel and then imaged by FR-PACT with 620-nm illumination. The conventional UBP reconstructed bipolar image is shown in Fig. 5(a), and the MVHT reconstructed image is shown in Fig. 5(b). The features of the two images match each other well. A magnetic resonance microscopy image of a similar mouse brain, with its structural segmentation superimposed as colored lines, is shown in Fig. 5(c) to serve as a reference for validating our results.

We also demonstrated the performance of the MVHT reconstruction method by imaging in vivo the trunk of an 8-week-old nude mouse (Hsd:Athymic Nude-Foxn1nu, Harlan Co., 20- to 30-g body weight). Ring-shaped side illumination at 750 nm was used for excitation. In the conventional UBP reconstructed bipolar image shown in Fig. 6(a), most of the internal organs, such as the two kidneys, the spleen, the gastrointestinal tract, and the spinal cord, are resolved. The MVHT reconstructed unipolar image [Fig. 6(b)] retains all of those features.

4. Discussion

Because of the limited acoustic bandwidth and elevational acceptance of the full-ring transducer array, the images reconstructed by the UBP method contain both positive and negative values. Such images are counter-intuitive for physicians and biologists to interpret because both positive and negative peaks represent strong optical absorption. PAT should ideally report the initial pressure or the optical energy deposition, which is proportional to the product of the local fluence and the optical absorption coefficient; thus, the pixel values in perfectly reconstructed PAT images should be nonnegative. To solve the bipolarity problem in UBP reconstructed images, we proposed the MVHT method for a full-ring geometry PACT system. MVHT reconstruction successfully recovers the unipolar initial pressure with an in-plane resolution of 148 μm, which is slightly worse than that of the UBP reconstruction because of the spatial-frequency low-pass nature of the envelope extraction. We also optimized the number of elements for each single-view reconstruction to achieve the best SNR without inducing artifacts. The performance of the MVHT method was demonstrated by numerical simulation, ex vivo imaging of a mouse brain, and in vivo whole-body imaging. MVHT provides a computationally efficient way to recover the unipolar initial pressure map from bandwidth- and elevational-acoustic-coverage-limited PA measurements.
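For readers who wish to reproduce the resolution quantification described in Sec. 3, the following minimal sketch is our own illustration; it assumes that a 1-D profile has already been extracted across a microsphere image and estimates the FWHM by Gaussian fitting with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, x0, sigma, offset):
    return a * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) + offset

def fwhm_from_profile(x, profile):
    """Estimate the FWHM of a 1-D image profile across a point-like target."""
    p0 = [profile.max() - profile.min(),      # amplitude guess
          x[np.argmax(profile)],              # center guess
          (x[-1] - x[0]) / 10.0,              # width guess
          profile.min()]                      # baseline guess
    popt, _ = curve_fit(gaussian, x, profile, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2.355 * sigma
```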
Appendix

Here, we compare and analyze three variant approaches to applying Hilbert transformation in full-ring geometry-based PACT, as illustrated in Figs. 7(a)–7(c). The first variant [Fig. 7(a)] is to directly apply Hilbert transformation to the raw channel data $p(\vec{r}_j,t)$ [in Eq. (1)], take the envelope, and then reconstruct the image using the enveloped data. The second variant [Fig. 7(b)] is to select a group of neighboring elements (54 elements for each view angle) for reconstruction, take the Hilbert transformation only along the centerline of the reconstructed image, and repeat this procedure for all view angles to complete the reconstruction. The third variant [Fig. 7(c)] is the method presented in this paper. A simple object, shown in Fig. 7(d), was the input for the numerical simulation. The forward process was simulated using the k-Wave toolbox. The reconstructed images of the three variants are shown in Figs. 7(e)–7(g). The first and second variants produce obvious reconstruction artifacts [Figs. 7(e) and 7(f)], whereas the proposed MVHT method successfully recovers the input without inducing artifacts [Fig. 7(g)]. The first variant creates image artifacts because directly enveloping the channel data removes the phase information of the detected acoustic signals. The second variant takes the envelope of only the centerlines of the partially reconstructed images, which discards most of the useful information and, thus, induces reconstruction artifacts.

Disclosures

L.V.W. has financial interests in Microphotoacoustics, Inc., which, however, did not support this work.

Acknowledgments

The authors appreciate the close reading of the paper by Professor James Ballard. We also thank Guo Li and Jun Xia for technical support and helpful discussions. This work was sponsored by the National Institutes of Health Grants DP1 EB016986 (NIH Director's Pioneer Award), R01 CA186567 (NIH Director's Transformative Research Award), R01 EB016963, U01 NS090579 (NIH BRAIN Initiative), and U01 NS099717 (NIH BRAIN Initiative).

References
1. L. V. Wang and J. Yao, "A practical guide to photoacoustic tomography in the life sciences," Nat. Methods 13, 627–638 (2016). http://dx.doi.org/10.1038/nmeth.3925
2. L. H. V. Wang and S. Hu, "Photoacoustic tomography: in vivo imaging from organelles to organs," Science 335, 1458–1462 (2012). http://dx.doi.org/10.1126/science.1216210
3. L. Li, J. Yao, and L. V. Wang, "Photoacoustic tomography enhanced by nanoparticles," in Wiley Encyclopedia of Electrical and Electronics Engineering, pp. 1–14, John Wiley & Sons, Inc., Hoboken, New Jersey (2016).
4. J. Yao et al., "Multiscale photoacoustic tomography using reversibly switchable bacterial phytochrome as a near-infrared photochromic probe," Nat. Methods 13, 67–73 (2016). http://dx.doi.org/10.1038/nmeth.3656
5. D. Razansky, A. Buehler, and V. Ntziachristos, "Volumetric real-time multispectral optoacoustic tomography of biomarkers," Nat. Protoc. 6, 1121–1129 (2011). http://dx.doi.org/10.1038/nprot.2011.351
6. X. L. Deán-Ben, S. J. Ford, and D. Razansky, "High-frame rate four dimensional optoacoustic tomography enables visualization of cardiovascular dynamics and mouse heart perfusion," Sci. Rep. 5, 10133 (2015). http://dx.doi.org/10.1038/srep10133
7. H. Zafar et al., "Linear-array-based photoacoustic imaging of human microcirculation with a range of high frequency transducer probes," J. Biomed. Opt. 20, 051021 (2014). http://dx.doi.org/10.1117/1.JBO.20.5.051021
8. Y. Wang et al., "Second generation slit-based photoacoustic tomography system for vascular imaging in human," J. Biophotonics (2016). http://dx.doi.org/10.1002/jbio.201600151
9. H. M. Heres et al., "Visualization of vasculature using a hand-held photoacoustic probe: phantom and in vivo validation," J. Biomed. Opt. 22, 041013 (2017). http://dx.doi.org/10.1117/1.JBO.22.4.041013
10. R. A. Kruger et al., "Thermoacoustic computed tomography using a conventional linear transducer array," Med. Phys. 30, 856–860 (2003). http://dx.doi.org/10.1118/1.1565340
11. Y. Xu et al., "Reconstructions in limited-view thermoacoustic tomography," Med. Phys. 31, 724–733 (2004). http://dx.doi.org/10.1118/1.1644531
12. G. Li et al., "Multiview Hilbert transformation for full-view photoacoustic computed tomography using a linear array," J. Biomed. Opt. 20, 066010 (2015). http://dx.doi.org/10.1117/1.JBO.20.6.066010
13. D. Yang et al., "Fast full-view photoacoustic imaging by combined scanning with a linear transducer array," Opt. Express 15, 15566–15575 (2007). http://dx.doi.org/10.1364/OE.15.015566
14. L. Li et al., "Label-free photoacoustic tomography of whole mouse brain structures ex vivo," Neurophotonics 3, 035001 (2016). http://dx.doi.org/10.1117/1.NPh.3.3.035001
15. E. Merčep et al., "Whole-body live mouse imaging by hybrid reflection-mode ultrasound and optoacoustic tomography," Opt. Lett. 40, 4643–4646 (2015). http://dx.doi.org/10.1364/OL.40.004643
16. L. Li et al., "Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution," Nat. Biomed. Eng. 1, 0071 (2017). http://dx.doi.org/10.1038/s41551-017-0071
17. A. A. Oraevsky and A. A. Karabutov, "Ultimate sensitivity of time-resolved optoacoustic detection," Proc. SPIE 3916, 228 (2000). http://dx.doi.org/10.1117/12.386326
18. J. Gateau et al., "Three-dimensional optoacoustic tomography using a conventional ultrasound linear detector array: whole-body tomographic system for small animals," Med. Phys. 40, 013302 (2013). http://dx.doi.org/10.1118/1.4770292
19. J. Gateau, A. Chekkoury, and V. Ntziachristos, "High-resolution optoacoustic mesoscopy with a 24 MHz multidetector translate-rotate scanner," J. Biomed. Opt. 18, 106005 (2013). http://dx.doi.org/10.1117/1.JBO.18.10.106005
20. J. Gateau, A. Chekkoury, and V. Ntziachristos, "Ultra-wideband three-dimensional optoacoustic tomography," Opt. Lett. 38, 4671–4674 (2013). http://dx.doi.org/10.1364/OL.38.004671
21. C. Huang et al., "Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media," IEEE Trans. Med. Imaging 32, 1097–1110 (2013). http://dx.doi.org/10.1109/TMI.2013.2254496
22. J. Zhang et al., "Effects of different imaging models on least-squares image reconstruction accuracy in photoacoustic tomography," IEEE Trans. Med. Imaging 28, 1781–1790 (2009). http://dx.doi.org/10.1109/TMI.2009.2024082
23. R. Ellwood et al., "Orthogonal Fabry-Pérot sensor array system for minimal-artifact photoacoustic tomography," Proc. SPIE 9323, 932312 (2015). http://dx.doi.org/10.1117/12.2081934
24. J. Yao et al., "High-speed label-free functional photoacoustic microscopy of mouse brain in action," Nat. Methods 12, 407 (2015). http://dx.doi.org/10.1038/nmeth.3336
25. L. Li et al., "Fully motorized optical-resolution photoacoustic microscopy," Opt. Lett. 39, 2117–2120 (2014). http://dx.doi.org/10.1364/OL.39.002117
26. C. Zhang et al., "In vivo photoacoustic microscopy with 7.6-μm axial resolution using a commercial 125-MHz ultrasonic transducer," J. Biomed. Opt. 17, 116016 (2012). http://dx.doi.org/10.1117/1.JBO.17.11.116016
27. M. Lacey and X. Li, "Maximal theorems for the directional Hilbert transform on the plane," Trans. Am. Math. Soc. 358, 4099–4118 (2006). http://dx.doi.org/10.1090/S0002-9947-06-03869-4
28. Y. S. Zhang et al., "Optical-resolution photoacoustic microscopy for volumetric and spectral analysis of histological and immunochemical samples," Angew. Chem. Int. Ed. 53, 8099–8103 (2014). http://dx.doi.org/10.1002/anie.201403812
29. L. Zhu et al., "Multiview optical resolution photoacoustic microscopy," Optica 1, 217–222 (2014). http://dx.doi.org/10.1364/OPTICA.1.000217
30. P. Zhang et al., "High-resolution deep functional imaging of the whole mouse brain by photoacoustic computed tomography in vivo," J. Biophotonics (2017). http://dx.doi.org/10.1002/jbio.201700024
31. J. Xia et al., "Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo," J. Biomed. Opt. 17, 050506 (2012). http://dx.doi.org/10.1117/1.JBO.17.5.050506
32. L. Lin et al., "In vivo deep brain imaging of rats using oral-cavity illuminated photoacoustic computed tomography," J. Biomed. Opt. 20, 016019 (2015). http://dx.doi.org/10.1117/1.JBO.20.1.016019
33. L. Lin et al., "In vivo photoacoustic tomography of myoglobin oxygen saturation," J. Biomed. Opt. 21, 061002 (2016). http://dx.doi.org/10.1117/1.JBO.21.6.061002
34. M. Xu and L. V. Wang, "Universal back-projection algorithm for photoacoustic computed tomography," Phys. Rev. E 71, 016706 (2005). http://dx.doi.org/10.1103/PhysRevE.71.016706
35. L. A. Kunyansky, "Explicit inversion formulae for the spherical mean Radon transform," Inverse Probl. 23, 373–383 (2007). http://dx.doi.org/10.1088/0266-5611/23/1/021
36. M. Haltmeier, "Inversion of circular means and the wave equation on convex planar domains," Comput. Math. Appl. 65, 1025–1036 (2013). http://dx.doi.org/10.1016/j.camwa.2013.01.036
37. B. E. Treeby and B. T. Cox, "k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields," J. Biomed. Opt. 15, 021314 (2010). http://dx.doi.org/10.1117/1.3360308
38. Y. Ma et al., "In vivo 3D digital atlas database of the adult C57BL/6J mouse brain by magnetic resonance microscopy," Front. Neuroanat. 2, 1 (2008). http://dx.doi.org/10.3389/neuro.05.001.2008
Biography

Lei Li received his BS and MS degrees from Harbin Institute of Technology, China, in 2010 and 2012, respectively. He is working as a graduate research assistant under the tutelage of Dr. Lihong Wang at Washington University in St. Louis. His current research focuses on photoacoustic microscopy and computed tomography, especially on improving the performance of photoacoustic small-animal whole-body imaging and applying it to functional brain imaging.

Liren Zhu is currently a graduate student in biomedical engineering at Washington University in St. Louis under the supervision of Lihong V. Wang. His research focuses on the development of innovative optical, photoacoustic, and ultrasonic imaging techniques for biomedical research.

Yuecheng Shen received his BSc degree in applied physics from the University of Science and Technology of China and his PhD in electrical engineering from Washington University in St. Louis. He is currently working as a postdoctoral research associate in medical engineering at the California Institute of Technology. His current research interests focus on developing new optical techniques based on stepwise wavefront shaping and optical time reversal.

Lihong V. Wang earned his PhD from Rice University, Houston, Texas. He currently holds the Bren Professorship of Medical Engineering and Electrical Engineering at the California Institute of Technology. He has published 470 peer-reviewed journal articles and has delivered 433 keynote, plenary, or invited talks. His Google Scholar h-index and citation count have reached 111 and 51,000, respectively.