Reflection of photoacoustic (PA) signals from strong acoustic heterogeneities in biological tissue leads to reflection artifacts (RAs) in B-mode PA images. In practice, RAs often clutter clinically obtained PA images, making the interpretation of these images difficult in the presence of hypoechoic or anechoic biological structures. Towards PA artifact removal, several researchers have exploited 1) the frequency/spectrum content of time-series photoacoustic data in order to separate the true signal from artifacts, and 2) the multi-wavelength response of photoacoustic targets, assuming that the spectral nature of RAs correlates well with their corresponding source signals. These approaches, however, require extensive offline processing and sometimes fail to correctly identify artifacts in deep tissue. This study demonstrates the use of a deep neural network with the U-Net architecture to detect and reduce RAs in B-mode PA images. In order to train the proposed deep learning model for the RA reduction task, a program was designed to randomly generate anatomically realistic digital phantoms of human fingers with the capacity to produce RAs when subjected to PA imaging. In silico PA imaging experiments, modeling photon transport and acoustic wave propagation, on these digital finger phantoms enabled the generation of 1800 training samples. The algorithm was tested on both PA images generated from digital phantoms and in vivo PA data acquired from human fingers using a hand-held LED-based PA imaging system. Our results suggest that robust reduction of RAs with a deep neural network is possible if the network is trained with sufficiently realistic simulated images.
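The physical mechanism behind an RA can be sketched with a simple model: an acoustic reflector produces a delayed, attenuated copy of the true PA signal in the recorded time series, which the beamformer then maps to a spurious deeper structure. The snippet below, a minimal illustration and not the simulation pipeline used in the study (the `add_reflection_artifact` function, delay, and attenuation values are all hypothetical), shows why training phantoms must include such echoes:

```python
import numpy as np

def add_reflection_artifact(a_line, delay, attenuation):
    """Superimpose a delayed, attenuated copy of a PA A-line,
    mimicking an echo from a strong acoustic reflector
    (illustrative model only)."""
    artifact = np.zeros_like(a_line)
    artifact[delay:] = attenuation * a_line[:-delay]
    return a_line + artifact

# True source: a short bipolar PA pulse starting at sample 50
n = 200
a_line = np.zeros(n)
a_line[50:55] = [0.0, 1.0, 0.0, -1.0, 0.0]

# The echo appears 60 samples deeper at 40% amplitude,
# i.e. as a phantom absorber that is not really there.
cluttered = add_reflection_artifact(a_line, delay=60, attenuation=0.4)
```

A network trained only on artifact-free simulations would never see the second pulse and could not learn to suppress it, which motivates the phantom design described above.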
In photoacoustic imaging, accurate spectral unmixing is required for revealing functional and molecular information of the tissue using multispectral photoacoustic imaging data. A significant challenge in deep-tissue photoacoustic imaging is the nonlinear dependence of the received photoacoustic signals on the local optical fluence and molecular distribution. To overcome this, we have deployed an end-to-end unsupervised neural network based on autoencoders. The proposed method employs physical properties as constraints on the neural network, which effectively performs the unmixing and outputs the individual molecular concentration maps without a priori knowledge of their absorption spectra. The algorithm is tested on a set of simulated multispectral photoacoustic images comprising oxyhemoglobin, deoxyhemoglobin, and indocyanine green targets embedded inside a tissue-mimicking medium. These in silico experiments demonstrated promising photoacoustic spectral unmixing results using a completely unsupervised deep learning approach.
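For context, the forward model underlying spectral unmixing is linear in the absence of fluence effects: the PA amplitude at each wavelength is a spectra-weighted sum of chromophore concentrations. The sketch below shows the conventional least-squares baseline under that assumption; the spectra matrix `E` and concentration values are illustrative, not measured. Unlike the unsupervised autoencoder described above, this baseline requires the absorption spectra to be known in advance:

```python
import numpy as np

# Hypothetical absorption spectra at four wavelengths
# (columns: oxyhemoglobin, deoxyhemoglobin, ICG); values illustrative.
E = np.array([
    [0.9, 0.3, 0.1],
    [0.6, 0.5, 0.2],
    [0.4, 0.7, 0.8],
    [0.2, 0.9, 0.5],
])

true_c = np.array([0.5, 0.2, 0.3])  # ground-truth concentrations
p = E @ true_c                      # simulated multispectral PA amplitudes

# Least-squares unmixing: recovers concentrations only because
# E is known and the mixing is assumed linear.
c_hat, *_ = np.linalg.lstsq(E, p, rcond=None)
```

In deep tissue, wavelength-dependent fluence makes the effective mixing nonlinear and `E` location-dependent, which is precisely the limitation the constrained-autoencoder approach aims to sidestep.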