Open Access
Photoacoustic imaging plus X: a review
28 December 2023
Daohuai Jiang, Luyao Zhu, Shangqing Tong, Yuting Shen, Feng Gao, Fei Gao
Abstract

Significance

Photoacoustic (PA) imaging (PAI) represents an emerging modality within the realm of biomedical imaging technology. It seamlessly blends the wealth of optical contrast with the remarkable depth of penetration offered by ultrasound. These distinctive features of PAI hold tremendous potential for various applications, including early cancer detection, functional imaging, hybrid imaging, monitoring ablation therapy, and providing guidance during surgical procedures. The synergy between PAI and other cutting-edge technologies not only enhances its capabilities but also propels it toward broader clinical applicability.

Aim

The integration of PAI with advanced technologies for PA signal detection, signal processing, image reconstruction, hybrid imaging, and clinical applications has significantly bolstered the capabilities of PAI. This review aims to deepen the understanding of how the synergy between PAI and other advanced technologies can lead to improved applications.

Approach

An examination of the evolving research frontiers in PAI integrated with other advanced technologies reveals six key categories, collectively named “PAI plus X”: PAI plus treatment; PAI plus advanced electrical and mechanical hardware, encompassing circuit design, accurate positioning systems, and fast scanning systems; PAI plus advanced laser sources; PAI plus advanced ultrasound sensors; PAI plus deep learning; and PAI plus other imaging modalities.

Results

After conducting a comprehensive review of the existing literature and research on PAI integrated with other technologies, various proposals have emerged to advance the development of PAI plus X. These proposals aim to enhance system hardware, improve imaging quality, and address clinical challenges effectively.

Conclusions

The progression of innovative and sophisticated approaches within each category of PAI plus X is positioned to drive significant advancements in both the development of PAI technology and its clinical applications. Furthermore, PAI not only has the potential to integrate with the above-mentioned technologies but also to broaden its applications even further.

1.

Introduction

Photoacoustic (PA) imaging (PAI) is a burgeoning modality within the realm of biomedical imaging that harnesses the dual benefits of rich optical contrast and high spatial resolution in deep scattering tissue, overcoming the optical diffusion limit by converting light to sound.1 PAI relies on the PA effect, facilitating the discernment of signals from within deep tissue layers. As shown in Fig. 1(a), a typical PAI system includes a nanosecond pulsed laser source for light illumination and an ultrasound transducer for PA signal detection. In addition, a low-noise preamplifier and a data acquisition (DAQ) device are used to amplify the PA signal and convert it into a digital form for post-processing and display.
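
In compact form, the PA signal generation underlying this pipeline follows standard PA theory (a general relation, not specific to any system reviewed here):

$$ p_0(\mathbf{r}) = \Gamma\,\mu_a(\mathbf{r})\,F(\mathbf{r}), \qquad \Gamma = \frac{\beta c^2}{C_p}, $$

where $p_0$ is the initial pressure rise, $\mu_a$ the optical absorption coefficient, $F$ the local optical fluence, $\Gamma$ the Grüneisen parameter, $\beta$ the thermal expansion coefficient, $c$ the speed of sound, and $C_p$ the specific heat capacity. The resulting broadband pressure wave propagates to the transducer, where the recorded time-of-flight encodes the depth of the absorber.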

Fig. 1

(a) Principle of PAI. (b) Overview of PAI plus X.


Over the past two decades, the different modalities of PAI systems have undergone rapid advancement, exemplified by PA tomography (PAT), PA microscopy (PAM), and PA endoscopy (PAE).2 Owing to its multi-wavelength optical absorption contrast, PAI effectively detects both intrinsic and extrinsic optical absorbers, thus cementing its position as a powerful tool for molecular and functional imaging.3 From a technical perspective, PAI is being empowered by, or empowers, other advanced technologies to achieve lower cost, deeper penetration, faster speed, treatment monitoring, intelligent diagnostics, and more. We name such an innovation strategy PAI plus X.

In this review, we summarize the recent progress on several important aspects of PAI plus X, as shown in Fig. 1(b): (1) PAI plus treatment, such as PAI-guided laser/ultrasound/radio-frequency (RF) therapy and surgery, etc.; (2) PAI plus advanced electrical and mechanical hardware, such as analog/digital circuits, electromagnetic/optical positioning systems, mechanical scanning systems, etc.; (3) PAI plus advanced laser sources, such as adaptive/diffractive optics enabled needle-shaped beams, low-cost compact laser diode (LD) and light-emitting diode (LED) sources, high-repetition-rate laser sources, etc.; (4) PAI plus advanced ultrasound sensors, such as expanding bandwidth with new materials, flexible substrates for wearable sensors, miniaturized capacitive micromachined ultrasonic transducer (CMUT)/piezoelectric micromachined ultrasonic transducer (PMUT) sensors, noncontact light-based acoustic sensors, etc.; (5) PAI plus deep learning, such as for image reconstruction and enhancement, motion correction and denoising, image analysis and quantification, etc.; and (6) PAI plus other imaging modalities, including multimodal PA/ultrasound/MRI/optical imaging, PA-generated ultrasound imaging, ultrasound-assisted PA image reconstruction, etc. Overall, we aim to provide a comprehensive overview of recent advancements in the field of PAI plus X, specifically within the last 3 years.

2.

PAI Plus Treatment

2.1.

PAI-Guided Ablation

Advancements in ablation methods, such as laser and RF ablation, have improved treatments for neural oncology, cardiac arrhythmias, and varicose veins. Ablation offers minimally invasive therapeutic options for conditions such as cancer and arrhythmias. However, immediate procedural assessment and real-time feedback remain lacking. PAI is emerging as a solution, potentially providing real-time insights into ablation-induced necrosis dimensions and temperature changes during ablation procedures.

Mohammad et al.4 proposed an integrated PAI-guided laser ablation intracardiac theranostic system that provides real-time tissue ablation, lesion monitoring, and tissue-distinguishing capabilities. The system, as shown in Fig. 2(a), offers a low-cost and safer approach for potentially minimizing complications and enhancing treatment procedures. Sun et al.10,11 developed a multi-wavelength PA temperature-feedback photothermal therapy (PTT) system for accurate and safe tumor treatment. Real-time temperature control within the target area achieves accuracies of 0.56°C and 0.68°C, highlighting its strong application potential. Silva et al.12 presented a multiphysical numerical study of PTT performed on a numerical phantom of a mouse head containing a glioblastoma with PA temperature monitoring. Yang et al.13 developed a non-invasive and high-resolution imaging tool called wavelength-switchable PAM to guide PTT by mapping tumor microvasculature and nanoparticle accumulation. PAM visualizes the tumor microvasculature, guiding PTT implementation and efficacy evaluation.
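
The physical basis shared by such temperature-feedback schemes is that the Grüneisen parameter of water-rich tissue increases approximately linearly with temperature over the therapeutic range, so, at fixed fluence and absorption, the relative PA amplitude tracks temperature (a standard relation used for PA thermometry, on top of which the cited systems add multi-wavelength correction and closed-loop control):

$$ \frac{p_0(T)}{p_0(T_0)} = \frac{\Gamma(T)}{\Gamma(T_0)} \approx 1 + a\,(T - T_0), $$

where $T_0$ is a reference temperature and $a$ is a tissue-dependent calibration constant.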

Fig. 2

PAI plus treatment. (a) A schematic and photograph of the miniaturized integrated US/PA-guided laser ablation theranostic system.4 (b) PA imaging result of the ablated swine liver sample in Ref. 5. (c) PA-guided US therapy with optimal benefits.6 Yellow: wavefronts of PA waves sensed by the transducers; blue: wavefronts of US transmitted by the transducers. PA: photoacoustic and US: ultrasound. (d) Schematic of the tri-modal system for HIFU therapy.7 (e) Schematic of the PA transmitter probe by Yu et al.8 (f) The overall system architecture of the TRUS + PA image-guided surgical guidance system.9


To provide real-time visual feedback on tissue ablation, Zhang et al.5 used PAI to distinguish ablated from non-ablated tissue based on their spectral differences, mapping the ablation extent and the growth of the lesion distribution [as shown in Fig. 2(b)]. Beck et al.14 investigated the safety of using PAI for liver surgeries with a 750 nm laser wavelength and 30 mJ laser energy and proposed a new protocol for studying laser-related liver damage.
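
As a conceptual illustration of such spectrum-based feedback, a per-pixel ratio of images acquired at two wavelengths can flag regions whose spectral signature departs from the untreated baseline. The sketch below is a minimal NumPy example with illustrative wavelengths and threshold; it is not the processing pipeline of Ref. 5.

```python
import numpy as np

def ablation_map(pa_w1, pa_w2, ratio_threshold=1.3, eps=1e-6):
    """Flag pixels whose two-wavelength spectral ratio departs from the
    non-ablated baseline.

    pa_w1, pa_w2 : 2D arrays of reconstructed PA amplitude at two wavelengths
                   chosen so that coagulation changes their ratio.
    Returns a boolean mask of putatively ablated pixels.
    """
    ratio = pa_w1 / (pa_w2 + eps)      # per-pixel spectral ratio
    baseline = np.median(ratio)        # assume most of the field is untreated
    return ratio > ratio_threshold * baseline

# Usage (illustrative wavelengths): mask = ablation_map(img_760nm, img_930nm)
# The ablated area can then be tracked frame by frame via mask.sum() * pixel_area.
```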

For ultrasound therapy, Xiang et al.6 suggested a hybrid method for treating port wine birthmarks using PAI-guided ultrasound, as shown in Fig. 2(c). This combines both modalities to target deeper capillaries while minimizing adjacent tissue heating. An array-based high-intensity focused ultrasound (HIFU) therapy system [Fig. 2(d)] integrated with real-time ultrasound and PAI was developed by Wang et al.7 The system can accurately target the treatment spot, flexibly and automatically adjust the focal point in the treatment region, and monitor therapy progress in real time using PA/ultrasound (US) dual-modal imaging.

2.2.

PAI-Enabled Ultrasound Treatment

Neuromodulation is important for understanding the nervous system and treating neurological and psychiatric disorders. Different techniques such as deep brain stimulation, transcranial magnetic stimulation, and electrical stimulation have been used for various conditions. The PA technique uses a pulsed laser to generate high-intensity ultrashort ultrasound pulses, allowing for high-resolution imaging of biological structures with potential applications in neuromodulation, nanomedicine delivery, ultrasound-encoded optical focus, and large-volume ultrasound tomography.

Yu et al. proposed several kinds of PA transmitters to generate acoustic waves, such as a dynamic acoustic focusing PA transmitter8,15 [as shown in Fig. 2(e)], a flat-PA-source-based ultrasound transmitter,16 and one based on binary amplitude switch control of the PA transducer for dynamic spatial acoustic field modulation.17 Jezerresk et al. proposed an ultrasonic PA emitter based on a graphene-nanocomposite film on a flexible substrate.18 The proposed strategies provide effective methods for dynamically manipulating the acoustic field of PA transmitters, which can have significant applications in various fields. Du et al. presented the development of a candle soot fiber optoacoustic emitter (CSFOE) that can generate high pressures of over 10 MPa with a central frequency of 12.8 MHz, enabling highly efficient neuromodulation in vitro.19 The CSFOE can perform dual-site optoacoustic activation of neurons, confirmed by calcium (Ca2+) imaging; it opens potential avenues for more complex and programmed control of neural circuits using a simple design for multisite neuromodulation in vivo. Precise drug delivery is important for internal organs. Xi et al.20 developed a dual-wavelength PA laparoscope for nanomedicine delivery and showed that the optical-resolution PAM (OR-PAM) based precise drug delivery method is promising for the effective treatment of internal organ diseases. Optical imaging is limited by scattering, which poses a great challenge for deep-tissue imaging; Zhang et al. proposed a wavefront shaping method based on time-reversed ultrasonically encoded optical focusing using a PA wave,21 achieving dynamic focusing of light into both optically and acoustically heterogeneous scattering media, which shows high potential for transcranial light focusing. The PA wave has a broad bandwidth, and it can potentially be used for ultrasound imaging. Manohar et al. proposed laser-induced ultrasound (LIUS) transmitters for large-volume ultrasound tomography.22 The LIUS transmitters produced a center frequency of 0.94 MHz with a bandwidth from 0.17 to 2.05 MHz, generating pressures from 180.17 kPa down to 24.35 kPa over depths from 7.42 to 62.25 mm.

2.3.

PAI-Guided Surgery

Surgery requires multimodal medical imaging information to improve its efficiency and success rate, which motivates the development of PAI-guided surgery. These endeavors encompass diverse areas, including PAI-guided surgical procedures, PAI-guided accurate biopsies, and tissue characterization.

Zhu et al.23 introduced the application of PAI for surgical navigation in spinal surgery procedures. Through a combination of theoretical analysis and experimental verification, the authors demonstrated the feasibility of this approach. For real-time surgical guidance, Boctor et al.9 presented a real-time intraoperative surgical guidance system employing PA markers. This system co-registers a Da Vinci surgical robot’s endoscope camera with a transrectal ultrasound (TRUS) transducer [as shown in Fig. 2(f)]. It enables functional guidance within the surgical region of interest by tracking the pulsed-laser-diode-illuminated laser spot on the surgical instrument.

To distinguish between tumor and normal tissue with PAI, Shi et al.24 proposed a 532/266 nm dual-wavelength PAM imaging system that can simultaneously perform in vivo analysis of the peritumoral vasculature and ex vivo surgical margin pathology of tumors. The system has the potential to guide the process of tumor resection, improve the efficiency of complete tumor resection in a single surgery, and reduce the recurrence rate. Verkhusha et al.25 presented a transgenic mouse model with a knocked-in soluble bacterial near-infrared photoreceptor, BphP1. The mouse model enables both spatiotemporal optogenetic regulation and PAI in deep tissues using the same genetically integrated BphP1 construct. The study validates the optogenetic performance of endogenous BphP1 and demonstrates PAI of BphP1 expression in different organs, developing embryos, virus-infected tissues, and regenerating livers. Xia et al.26 developed a compact, high-speed PA endomicroscopy probe capable of real-time visualization of tissue’s functional, molecular, and microstructural features. Integrated into a medical needle cannula, this probe shows promise in guiding minimally invasive procedures such as tumor biopsies. They also proposed a deep learning framework based on U-Net to improve the visibility of clinical metallic needles with an LED-based PA and ultrasound imaging system.27 The application of PA technology in surgical contexts is imperative and holds significant potential for future developments.

3.

PAI Plus Advanced Electrical and Mechanical Hardware

Achieving high-quality PA images necessitates capturing PA signals effectively, which involves various key hardware components, such as analog and digital circuits for signal amplification, denoising, and acquisition. Additionally, the ultrasound probe’s positioning is crucial for accurate image registration, and the integration of advanced mechanical scanners can significantly expedite the imaging process. In this section, we delve into the current research landscape concerning PAI plus advanced hardware, encompassing analog/digital circuits, accurate positioning systems, and innovative scanning mechanisms.

3.1.

Analog and Digital Circuits Enabled Low-Cost PAI

The capture of PA signals mainly involves acoustic sensing, amplification, and DAQ. To achieve higher detection sensitivity in a miniaturized size, Zheng et al. presented two cutting-edge approaches in coherent PA sensing technology. The first work28 involves a silicon-based sensing system-on-chip (SoC) designed for precise in vivo sensing and imaging, particularly for deep vessel and blood temperature assessments. This SoC, fabricated in a TSMC 65-nm CMOS process, holds promise for healthcare monitoring and early disease detection. The second work29 introduced a quadrature adaptive coherent lock-in (QuACL) chip, a compact chip-based PA sensor utilizing adaptive coherent lock-in techniques for accurate PA signal detection in challenging conditions [as shown in Fig. 3(a)]. Although its implementation requires two analog-to-digital conversion boards, it has the potential to be integrated into wearable healthcare devices in the future.
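
For readers unfamiliar with the principle, the sketch below shows a textbook digital quadrature lock-in demodulator; the cited chips realize this idea (with adaptive reference tracking) in analog/mixed-signal hardware, so the code is only a conceptual stand-in.

```python
import numpy as np

def quadrature_lockin(signal, fs, f_ref, lp_taps=512):
    """Digital quadrature lock-in: mix the input with in-phase and quadrature
    references at f_ref, low-pass filter, and recover amplitude and phase."""
    t = np.arange(len(signal)) / fs
    i_mix = signal * np.cos(2 * np.pi * f_ref * t)
    q_mix = signal * np.sin(2 * np.pi * f_ref * t)
    lp = np.ones(lp_taps) / lp_taps                 # simple moving-average low-pass
    i_dc = np.convolve(i_mix, lp, mode="same")
    q_dc = np.convolve(q_mix, lp, mode="same")
    amplitude = 2 * np.hypot(i_dc, q_dc)            # recovered amplitude at f_ref
    phase = np.arctan2(q_dc, i_dc)                  # recovered phase (up to sign convention)
    return amplitude, phase
```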

Fig. 3

PAI plus electrical and mechanical hardware. (a) System diagram of QuACL chip-based PA detection.29 (b) The MAP PAM imaging result by a peak holding circuit.30 (c) The setup of the 3D fsPAT imaging system and human blood vessel 3D imaging result.31 (d) Schematic of the multiscale PAM with polygon-scanning method and its OR-PAM imaging result.32 (e) Working principle of the TBS mentioned in Ref. 33 and the PAM imaging result by the scanner; the fast-axis scanning rate is up to 400 Hz.


There have also been many attempts to reduce the system cost. Ji et al.30 proposed a low-cost and compact PA maximum amplitude projection (MAP) microscopy system based on a custom-made peak holding (PKH) circuit, which allows for ultra-low data sampling. The system has the same imaging ability as conventional PAM systems. It provides a new paradigm for PAM and offers a cost-effective solution for optimal PA sensing and imaging devices; Fig. 3(b) shows the imaging result of the PKH-circuit-based PAM. In pursuit of high-fidelity PA images, capturing an increased number of PA signals escalates hardware costs. To mitigate DAQ channel consumption, Jiang et al.34–37 introduced various time- and frequency-division multiplexing approaches. Examples include a multi-channel delay line module and a low-cost PAT system based on frequency-division multiplexing. These methods effectively curtail the DAQ system’s cost.
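
The idea behind frequency-division multiplexing here is that each detector channel is allocated its own carrier band before being summed onto a shared DAQ channel, so the individual signals can later be separated by band-pass filtering. A minimal sketch of the receive-side separation, with a purely illustrative band plan rather than the published design, is:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def demultiplex(mixed, fs, bands):
    """Recover individual detector signals from one frequency-division-multiplexed
    DAQ trace by band-pass filtering each allocated band.

    mixed : 1D array, the summed signal on the shared DAQ channel
    fs    : sampling rate in Hz
    bands : list of (f_low, f_high) tuples in Hz, one per detector
    """
    recovered = []
    for f_lo, f_hi in bands:
        sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        recovered.append(sosfiltfilt(sos, mixed))
    return recovered

# Usage with a hypothetical two-channel band plan:
# ch1, ch2 = demultiplex(daq_trace, fs=125e6, bands=[(1e6, 5e6), (7e6, 11e6)])
```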

To reduce the computational cost of PA image reconstruction, Shen et al.38 introduced a faster model-based image reconstruction method based on the superposed wave (s-wave). The proposed method demonstrates substantial time savings, particularly in sparse 3D configurations, where it is over 2000 times faster. To realize fast and cost-effective image reconstruction for PAI, Gao et al.39 proposed a palm-size and affordable PAT system with hardware acceleration. The system implements high-quality image reconstruction on a low-cost, low-power field-programmable gate array (FPGA) platform that is adaptable to various image reconstruction algorithms, accelerating reconstruction at a much lower system cost.
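
The baseline algorithm that such hardware accelerators typically parallelize is delay-and-sum (back-projection-style) beamforming: each pixel accumulates the samples whose time-of-flight matches the pixel-to-sensor distance. A minimal NumPy sketch of this principle (not the cited FPGA implementation) is:

```python
import numpy as np

def delay_and_sum(rf, fs, sensor_xy, pixel_xy, c=1500.0):
    """Minimal delay-and-sum PAT reconstruction.

    rf        : (n_sensors, n_samples) PA signals with t = 0 at the laser pulse
    fs        : sampling rate in Hz
    sensor_xy : (n_sensors, 2) sensor positions in meters
    pixel_xy  : (n_pixels, 2) image grid positions in meters
    c         : assumed speed of sound in m/s
    Returns a (n_pixels,) image vector (reshape to the grid afterwards).
    """
    n_sensors, n_samples = rf.shape
    image = np.zeros(len(pixel_xy))
    for s in range(n_sensors):
        d = np.linalg.norm(pixel_xy - sensor_xy[s], axis=1)   # pixel-to-sensor distance
        idx = np.round(d / c * fs).astype(int)                # time-of-flight in samples
        valid = idx < n_samples
        image[valid] += rf[s, idx[valid]]
    return image / n_sensors
```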

3.2.

Positioning System Enhanced PAI

The quality of the reconstructed PA image relies on the accurate positioning of the ultrasound probe. To ensure image alignment and accurately register the images with the imaging target, various techniques have been employed; notably, for handheld PAI systems, optical camera-based positioning and electromagnetic field-based global positioning system (GPS) tracking have been adopted to tackle this challenge. Liao et al.40 introduced a new PAI system called ViCPAI that combines a visible CCD camera with an ultrasound transducer for precise positioning and imaging in preclinical and clinical studies. The system accurately locates target areas and achieves reproducible positioning, allowing for real-time capturing of cerebral hemodynamic changes during various experiments, such as forelimb stimulation and stroke induction. It also enables the monitoring of cortical spreading depression and the progression of peri-infarct depolarization after stroke. The ViCPAI system overcomes the limitations of existing imaging systems by providing precise positioning capabilities and an intuitive user interface.

Jiang et al.31,41 proposed a handheld free-scan 3D PAT system (fsPAT) for clinical applications. Using a linear-array ultrasound probe coordinated via an electromagnetic field-based GPS system, it achieves real-time 2D imaging and large field-of-view 3D volumetric imaging [as shown in Fig. 3(c)]. A specialized space transformation method and reconstruction algorithm further enhance the 3D image quality. In vivo human wrist vessel imaging demonstrates the clinical potential of fsPAT, revealing detailed subcutaneous vessels with high image contrast. For the PAM system, Wang et al.42 introduced FS-PAM, a handheld probe overcoming traditional OR-PAM limitations such as a limited field of view, bulky probes, and slow speed. FS-PAM uses a hybrid resonant-galvo scanner for high-speed dual-axis scanning, offering high-resolution, motion-artifact-reduced, label-free hemodynamic and functional imaging. Real-time imaging and a simultaneous localization and mapping mode are possible due to its high scanning speed. FS-PAM’s success is exemplified in imaging mouse organs, human oral mucosa, localizing brain lesions, and stroke models.
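
Conceptually, the space transformation in such free-scan systems maps every pixel of each tracked 2D frame into a common world coordinate system using the pose reported by the tracker; the accumulated point cloud is then re-gridded into a 3D volume. The sketch below is a generic rigid-body transform, simplified relative to the calibration chain described in Refs. 31 and 41.

```python
import numpy as np

def frame_to_world(pixel_xyz_probe, R, t):
    """Map pixel coordinates from the probe frame into the world frame using
    the tracker-reported pose (rotation matrix R, translation vector t).

    pixel_xyz_probe : (N, 3) pixel coordinates in the probe coordinate system
    R               : (3, 3) rotation from probe frame to world frame
    t               : (3,)   position of the probe origin in the world frame
    """
    return pixel_xyz_probe @ R.T + t

# Stacking frame_to_world(...) over all tracked frames yields the irregular
# point cloud that is subsequently interpolated onto a regular 3D grid.
```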

To precisely align imaging results with specific imaging targets, such as blood vessels, Yang et al.43 introduced a method that combines PAT and optical projection for noninvasive high-resolution imaging of deep blood vessels in the human body. By aligning PA data with real patient anatomy, this technique enables three-dimensional visualization of blood vessels from the body surface. The system has guided micro-plastic injections and revealed submillimeter forehead blood vessels, showing potential for aesthetic medicine.

3.3.

Advanced Scanning Mechanisms

PAI relies on ultrasound transducers to receive PA signals from various positions. Notably, in PAM systems, the need to raster-scan all imaging pixels often results in time-consuming data acquisition. Consequently, employing advanced scanning mechanisms can substantially enhance the imaging speed. Liu et al.32 presented an improved multiscale PAM system that achieves high-speed wide-field imaging using a homemade polygon scanner [as shown in Fig. 3(d), left]. The system overcomes the trade-off between imaging speed and field of view in previous PAM systems and demonstrates an imaging speed increased by a factor of 12.35 compared with previous systems; the right subgraph in Fig. 3(d) shows the OR-PAM imaging result obtained with the polygon scanner. Saijo et al.44 introduced a novel and simple distortion correction method for high-speed OR-PAM with a microelectromechanical system (MEMS) scanner. Yao et al.33 introduced a high-speed functional OR-PAM system employing a water-immersible two-axis torsion-bending scanner (TBS) [as shown on the left of Fig. 3(e)]. This innovation accelerates traditional OR-PAM imaging by enabling rapid 2D scanning with independent adjustments of the scanning speed and range along both axes. With a cross-sectional frame rate of 400 Hz and a volumetric imaging speed of 1 Hz across a 1.5 × 2.5 mm2 field of view, the system effectively captures dynamic information in small animal models in vivo, including hemodynamic changes under pharmaceutical and physiological influences; the right subgraph of Fig. 3(e) shows the imaging result of the PAM with the two-axis TBS.

For more dedicated applications of PAM, Xi et al.4547 introduced several interesting PAM platforms with versatile applications. The organ-PAM platform enables high-resolution imaging of multiple vessel systems within organs, revealing insights into pathological conditions. An ultrafast functional PAM system achieves real-time whole-brain imaging of hemodynamics and oxygenation at micro-vessel resolution, showing potential for fundamental brain research.46 Additionally, they present a detachable head-mounted PAM probe for optical-resolution imaging in freely moving mice,47 offering stable imaging of cerebral dynamics. These platforms collectively push the boundaries of PAI, offering a spectrum of capabilities from organ-level visualization to dynamic brain studies.

For multi-wavelength PAM, Ishihara et al.48 proposed a new spectroscopic OR-PAM technique that allows for the acquisition of information on the PA signal intensity and excitation wavelength from a single spatial scan. The technique uses two broadband optical pulses, with and without wavelength-dependent time delays, to calculate the excitation wavelength of the sample. The combination of this technique with fast spatial scanning methods can significantly contribute to recent OR-PAM applications. The limited detection view and detection bandwidth in OR-PAM can lead to a nonlinear dependence on optical absorption, especially for weakly absorbing targets. The study by Duan et al.49 proposed potential solutions and confirmed them through numerical simulations and experimental validation on phantoms. The findings suggest that a side detection scheme or a high optical numerical aperture (NA) may mitigate the low detection sensitivity of OR-PAM for weakly absorbing targets.

Xiong et al.50 developed a flexible forward-view PAE probe based on a resonant fiber scanner, which enables noninvasive biopsy in narrow areas of internal organs. The probe integrates a piezoelectric bender, a fiber cantilever, a lens, an ultrasound transducer, and a coupler, and it achieves a lateral resolution of 15.6 μm in a field of view of 3 mm diameter. It has potential as a minimally invasive tool for the clinical assessment of gastrointestinal lesions.

Zhao51 proposed a novel design for a continuously-adjustable light-scanning handheld probe for PAI, which can acquire multiple images using different illumination schemes and can be easily held with one hand. The probe consists of three parts: a medical US linear-probe clamp, a light transition unit, and an optic wedge unit for light beam shaping. The design allows for adjusting the illumination schemes according to different samples, which addresses the issue of delivering more photons to deeper tissues without exceeding safety standards or causing overexposure.

4.

PA Plus Advanced Laser Sources

In PAI, the target tissue absorbs light energy under the illumination of a laser source, leading to energy conversion and the generation of PA signals that carry tissue’s molecular information. In this process, the selection of the laser source directly determines the efficiency and quality of signal generation, and it has a significant impact on the overall system cost. Throughout the development of PAI, researchers have continuously explored and improved the selection of laser sources, considering factors such as laser source types, size, costs, repetition rate, and so on, to enhance PAI’s performance and expand its potential applications.

4.1.

Enhancing Resolution

Multiple techniques have been presented for overcoming the factors limiting spatial resolution to improve the visualization of fine structures and details. The ultimate resolution achieved in PAI depends on both the optical excitation and the acoustic detection. Therefore, many research studies have focused on improving resolution by innovating laser sources. Nteroli et al.52 and Notsuka et al.53 improved resolution through novel laser sources and adaptive optics, respectively. Nteroli et al. showed that ultrafast picosecond excitation can generate ultrasound waves of higher frequency for up to a 50% improvement in axial resolution. Adaptive optics53 helps compensate aberrations to maintain a tight optical focus and high lateral resolution at depth for OR-PAM. Cao et al. developed a needle-shaped optical excitation beam using diffractive optical elements to extend the depth of field to around 28 Rayleigh lengths54 [as shown in Fig. 4(a)]. This enables high-resolution imaging over an extended imaging volume without the need for fine depth scanning and focus adjustment. The corresponding experimental results are shown in Fig. 4(d) and demonstrate that advanced laser sources do produce images of high quality.
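
For context, the depth of field of a conventional Gaussian focus is on the order of the confocal parameter $2 z_R$, where the Rayleigh length is

$$ z_R = \frac{\pi w_0^2}{\lambda}, $$

with $w_0$ the beam waist and $\lambda$ the optical wavelength; the diffractive needle-shaped beam therefore keeps a tight lateral focus over a range roughly 28 times $z_R$, which is what removes the need for depth scanning.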

Fig. 4

Novel laser sources plus PAI. (a) Principle of a DOE combining M foci to form the needle-shaped beam (OR-PAM with a needle-shaped beam). (b) The system schematic of the dual-wavelength LD-based PAM.55 (c) A schematic diagram of dual-compressed PA single-pixel imaging.56 (d) Depth-resolved imaging of carbon fibers with a needle-shaped beam (OR-PAM with a needle-shaped beam).


These advancements push the boundaries of axial resolution and demonstrate the potential for PAI to achieve diffraction-limited resolution sufficient for resolving subcellular structures, even in deep tissues. They also allow a tight lateral resolution to be maintained when imaging uneven surfaces or acquiring volumetric data.

4.2.

Shrinking Cost and Size

It is important for a PAI system to be both competitive in imaging quality and accessible to more patients. Therefore, lower system costs combined with smaller laser sources facilitate the development of portable imaging systems that can be moved close to the patient bedside or surgical suite. Li et al.55 demonstrated a compact dual-LD PAM at a cost nearly 20 to 40 times lower than that of the standard pulsed laser systems typically used for PA excitation [as shown in Fig. 4(b)]. The LDs provide sufficient pulse energy for high-resolution OR-PAM. Replacing costly and bulky pulsed lasers with inexpensive compact laser diodes could help expand the adoption of PAI. Similarly, Heumen et al. explored the use of low-cost LEDs as excitation sources for visualizing lymphatic vessels in patients with secondary limb lymphedema.57 Deng et al.58 and Liang et al.59 also showcased portable, low-cost LD- and LED-based PA imaging systems tailored for specific clinical applications, such as subsurface microvasculature or lymphatic imaging. Deng et al. developed an LD-based system with a long working distance of 22 mm using reflective optics.58 The long working distance enables non-contact imaging and overcomes the limitations of the short working distances of the high-NA objectives needed to focus LD beams. Moreover, Song et al.60 developed a multiscale technique that can tune the resolution by controlling the spatial frequency of structured illumination patterns, which avoids slow mechanical scanning.

The low cost and small size of LDs and LEDs make them well suited for translating PAI to bedside use. Their efficiency and reduced power requirements also enable the development of compact portable imaging systems.

4.3.

Increasing Imaging Speed and Frame Rate

Boosting PAI’s speed enables real-time visualization of dynamic physiological processes across different time scales. Chen et al. achieved video-rate 30 Hz PAI over a 473 μm field of view using non-scanning single-pixel detection.61 This represents a nearly two orders of magnitude increase in speed over conventional scanning OR-PAM. Real-time PAI could enable new research and clinical capabilities, ranging from imaging blood flow to tracking cancer cell metastases. Such a speed improvement is realized by combining single-pixel detection with customized temporally modulated illumination patterns, overcoming the limitations imposed by conventional raster scanning. Guo et al. and Song et al. applied computational approaches such as compressed sensing56 [as shown in Fig. 4(c)] and Fourier basis encoding60 to accelerate data acquisition and reconstruction, helping overcome the speed limitations of conventional scanning and reconstruction methods. Compressed sensing can provide high-fidelity reconstruction from sparse sampling by exploiting image sparsity and redundancy. Fourier basis encoding utilizes predictable mathematical patterns to enable reconstruction from limited detection data. These computational innovations could be combined with alternative detection schemes to achieve real-time PAI.
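
To make the compressed-sensing step concrete, single-pixel recovery can be posed as a sparse inverse problem $\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\|x\|_1$, where the rows of $A$ are the illumination patterns and $y$ are the single-pixel measurements. The sketch below solves this with plain iterative soft-thresholding (ISTA), assuming sparsity directly in the pixel basis for brevity; published methods typically use a transform basis and more advanced solvers.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding for compressed-sensing recovery:
    argmin_x 0.5*||A x - y||^2 + lam*||x||_1.

    A : (n_measurements, n_pixels) measurement matrix (illumination patterns)
    y : (n_measurements,) single-pixel PA measurements
    """
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Usage with hypothetical names:
# img = ista(patterns.reshape(n_meas, -1), measurements).reshape(h, w)
```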

4.4.

Enabling More Clinical Applications

Advanced laser sources are key to expanding the imaging capabilities and applications of PAI, from preclinical animal models to human patients. Lipid detection could provide critical information about disease progression in conditions such as atherosclerosis. Multispectral imaging allows for better differentiation of lipid, hemoglobin, and other absorbers. Ren et al.,62 Mukhangaliyeva et al.,63 and Liang et al.59 showcased handheld, non-contact, and other systems tailored for clinical use. The handheld probe in Ref. 62 enables detecting optical anisotropy for assessing tissues such as nerves and tendons during surgical procedures. The non-contact imaging developed in Refs. 59 and 63 helps prevent contamination of and damage to delicate samples. Lee et al. demonstrated a specialized fiber laser providing picosecond pulses at 1192 nm for PAM of lipids in the second near-infrared window.64 Operating in this optical window allows for deeper penetration for lipid imaging. Tachi et al. used a supercontinuum source for chromatic aberration-free multispectral imaging.65 Moreover, Heumen et al.57 demonstrated the promise of translating PAI to patients by visualizing lymphatic vessels in human subjects with LED excitation. As the technology matures, translation from preclinical studies to clinical use, with the attendant improvements in patient care and outcomes, could accelerate.

In summary, ongoing innovations in laser sources are helping make PAI systems more accessible, faster, higher resolution, and better suited for clinical translation. With its unique ability to provide high resolution optical absorption contrast deep in tissues, PAI is poised to become a valuable new tool for both biomedical research and clinical diagnosis.

5.

PA Plus Advanced Ultrasound Sensors

Typically, a dedicated ultrasound transducer is used for PA signal acquisition, enabling subsequent signal processing and analysis. Because tissue size, morphology, and composition vary across different regions, the requirements for the ultrasound sensor can differ, necessitating adjustments in the material, size, shape, and arrangement of individual elements within the ultrasound transducer to ensure both detection accuracy and convenience. Recent research studies have highlighted a series of ultrasound sensor innovations, including novel geometries, materials, fabrication methods, and sensing principles.

5.1.

Large-Bandwidth and Multi-Frequency Sensors

Due to the different optical and acoustic characteristics of different types of tissues, the frequency spectra of PA signals emitted by tissues may exhibit noticeable differences, leading to varying demands on the central frequency and bandwidth of the ultrasound transducer. Several works have pushed the boundaries of the sensor bandwidth and frequency range to overcome the limitations of conventional piezoelectric detectors [as shown in Fig. 5(a)]. Multi-frequency or broadband detectors have been implemented to capture a wider range of PA signal frequencies.66,71–73 Broader bandwidths improve imaging resolution, allow access to higher frequency content, and enable spectroscopic PA analysis. Spectroscopic detection analyzes the acoustic frequency spectrum at each pixel to extract additional information about the optical absorber for enhanced tissue characterization. However, achieving wide, flat sensor bandwidths spanning tens to hundreds of MHz remains technically challenging.

Fig. 5

Advanced ultrasound sensors for PAI. (a) Cross-sectional structure of the broadband transducer and a photograph of the transducer reported in Ref. 66. (b) Design and structure of the sensitive ultrawideband transparent ultrasound detector in Ref. 67. (c) Photographs of the fabricated flexible transparent CMUT arrays bonded with a flexible PCB.68 (d) Schematic showing the cross-section and ultrasound detection mechanism of a surface-micromachined optical ultrasound transducer element reported in Ref. 69. (e) Schematic diagram and photograph of the fiber-optic ultrasound sensor.70 (f) Sequential in vivo OR-OAM mouse ear edge images and the corresponding MIP representation with the design in Ref. 67.


5.2.

Transparent and Flexible Sensors

Fabricating sensors on transparent substrates is another important theme.74–77 Transparent sensors enable co-axial or trans-illumination optical delivery, which is critical for some implementations such as handheld probes. They also expand the possibilities for multimodal imaging, allowing for flexible ultrasound detector integration with other optical modalities, including within optical microscopy setups.

However, the fabrication of transparent ultrasound sensors with adequate sensitivity and bandwidth has been a persistent challenge. Materials and designs to improve transparent sensor bandwidth, sensitivity, and acoustic matching have been investigated. Osman et al.75 explored using dispersed glass microbeads in epoxy to create acoustic matching layers. Meanwhile, Chen et al.74 and Peng et al.76 applied various piezoelectric materials such as lithium niobate for making transparent ultrasound sensors and transducers. Realizing transparent detectors with adequate acoustic characteristics would allow for flexible integration into multi-modal imaging systems and open up novel implementation methods.

5.3.

Novel Sensor Geometries and Arrangements

Beyond fundamental materials and fabrication advances, researchers have also innovated sensor geometries and arrangements to enhance PAI. Zhang et al.78 and Ma et al.79 exploited non-spherical elliptical and needle-shaped focusing to achieve an extended depth of field. Dense sensor arrays and transparent arrays68 [as shown in Fig. 5(c)] with up to thousands of elements have also been developed, enabling real-time image acquisition without mechanical scanning.68,69,73,80,81 Fu et al.82 presented a hockey-stick-shaped sensor tailored for intraoral imaging of posterior teeth, which are not easily accessed by conventional linear transducer geometries. These works demonstrate how purposefully engineering the sensor geometry for specific use cases and limitations can improve imaging performance and expand the range of potential applications of customizable, application-specific sensor designs.

5.4.

Miniaturized Sensors

Another major advancement is the development of miniaturized sensors that facilitate integration into compact imaging systems and enable new minimally invasive applications [as shown in Fig. 5(e)]. Several studies have presented miniaturized optical fiber, silicon, and PMUT-based sensors with dimensions <1 mm, suitable for catheter or endoscopic imaging.70,83–85 Such miniaturized sensors overcome the limitations of bulky piezoelectric detectors and open new possibilities for invasive imaging. These sensors can be placed at the tip of needles and catheters to provide high-resolution in vivo imaging during guided interventions or implemented in thin endoscopic probes for assessing internal hollow organs.

However, scaling down the sensor size can compromise the sensitivity and bandwidth if not designed carefully. Therefore, researchers have also explored novel materials and fabrication techniques to maintain adequate acoustic performance in miniaturized footprints. Ustun et al.83 utilized high-frequency silicon-based acoustic delay lines to achieve a 20 MHz bandwidth for their sub-millimeter fiber-optic sensor. Wang et al.84 and Cai et al.85 exploited the favorable scaling of PMUT technology to enable micromachined arrays for endoscopic imaging. Ongoing efforts to optimize miniaturized sensor designs while preserving the sensitivity, bandwidth, and other acoustic characteristics will enable translation to clinically valuable minimally invasive imaging applications.

Fig. 6

Brief comparison of end-to-end neural network methods (a) and GAN methods (b). End-to-end methods rely on a well-designed neural network to learn the forward mapping defined by the paired dataset, whereas GAN needs a generator and a discriminator to perform adversarial learning.


5.5.

Non-Contact Light-Based Acoustic Sensor

Several works have also showcased optical detection schemes that offer advantages over conventional piezoelectric sensors. Although optical sensing modalities have trade-offs, such as weaker interference immunity and more complex nanofabrication, which may hinder their broader adoption, continuing improvements in optical detector performance are making them more viable alternatives. Optical interferometry, surface plasmon resonance, and related principles have been utilized to achieve acoustic sensing.67,72,79,86–95 Compared with piezoelectric sensors, optical detection provides higher sensitivity, larger bandwidths extending to hundreds of MHz, smaller footprints without electrical connections, and easier integration with optical excitation sources. Optical sensing is also free of acoustic impedance mismatches, making signal detection more convenient and efficient.

The growing number of in vivo imaging demonstrations reveals the expanding practical applications as the sensor technology matures. Research studies in Refs. 84, 85, and 96 have presented endoscopic and other minimally invasive sensor configurations tailored for specific clinical use cases and imaging needs. Meanwhile, the researchers in Refs. 67, 70, 79, 94, and 95 directly showcased in vivo imaging enabled by optical or miniaturized sensors, including a brain imaging demonstration. The translation to preclinical and clinical imaging is critical to validate the sensors and make a real-world impact.

6.

PA Plus Deep Learning

6.1.

Image Reconstruction and Enhancement

A major focus of deep learning in PAI has been improving image reconstruction, especially under challenging conditions such as limited-view geometries in which the acquired data are sparse. Methods such as DL-PAT97 and DuDoUNet98 use convolutional neural networks to reduce artifacts and improve image quality from limited-view data, and Wang et al. proposed a learned regularization approach.

DL-PAT97 implements a conditional generative adversarial network (GAN) to enhance 3D dynamic volumetric PA computed tomography. By training on a subset of the transducer elements of conventional systems, DL-PAT can reconstruct high-quality images comparable to full-view methods while using fewer elements. This reduces the cost and data size of PAI systems. Quantitative studies have shown that DL-PAT reduces artifacts and improves the signal-to-noise ratio (SNR). Seong et al.99 also introduced deep learning techniques into 3D PA imaging. DuDoUNet98 is specifically designed for limited-view PAT. It uses a U-Net architecture that takes both the time-domain and frequency-domain representations of the limited-view data as input. This provides complementary information to distinguish artifacts from true signals. An information sharing block fuses and compares the dual-domain inputs. Experiments on a clinical database showed that DuDoUNet reconstructed images with a structural similarity of 93.56% and a peak SNR of 20.89 dB, outperforming conventional limited-view methods.
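
As a point of reference for readers new to these architectures, the sketch below is a deliberately tiny residual U-Net-style network for artifact suppression in reconstructed PA images; it is far shallower than DL-PAT or DuDoUNet and is not their published code.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net-style network: one downsampling stage, one skip
    connection, and a residual output that predicts the artifact correction."""

    def __init__(self, ch=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(ch, 1, 1))

    def forward(self, x):
        e1 = self.enc1(x)                          # full-resolution features
        e2 = self.enc2(self.down(e1))              # half-resolution features
        d1 = torch.cat([self.up(e2), e1], dim=1)   # skip connection
        return x + self.dec1(d1)                   # residual correction

# Usage: net = TinyUNet(); cleaned = net(torch.randn(1, 1, 128, 128))
```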

Other works have applied deep learning for computational acceleration. For example, the proposed method in Ref. 100 achieves faster convergence by learning regularization features. It uses a CNN within a model-based gradient descent reconstruction to learn regularization parameters automatically instead of with manual adjustment. AS-Net101 fuses multi-feature information to enable faster reconstruction from sparse data. It was demonstrated to provide superior image quality from limited data compared with conventional model-based reconstruction.

Beyond image reconstruction, deep learning has been used for resolution enhancement. Works such as AR-to-OR domain transfer learning (AODTL)-GAN102 train GANs to transform acoustic-resolution images to optical-resolution quality. AODTL-GAN uses a two-stage GAN approach. First, a generative model is trained on simulated acoustic- and optical-resolution image pairs. Then, a second network adapts the output to match real optical-resolution images through domain transfer learning. Quantitative metrics such as the peak SNR and structural similarity index were also significantly increased. Cheng et al. employed a GAN to enhance the lateral resolution of acoustic-resolution PAM (AR-PAM) images, transforming them to achieve OR-PAM quality.103 This GAN enables deep-tissue imaging and demonstrates potential applications in biomedicine.

Others, such as Deep-E,104 focus on enhancing the elevation resolution in linear-array-based systems, which is inherently limited by the transducer geometry. Based on U-Net, Deep-E enhances the elevational resolution of linear-array-based PAT by training on 2D slices in the axial and elevational planes, improving resolution by at least a factor of four and enabling potential high-speed, high-resolution image enhancement applications. Kim et al.105 introduced a computational strategy utilizing deep neural networks (DNNs) for enhancing both the temporal and spatial resolutions in localization PAI, which is illustrated in Fig. 7(a). The proposed DNN-based method reconstructs high-density super-resolution images from a reduced number of raw frames. This approach is applicable to both 3D label-free localization OR-PAM and 2D labeled localization PAT. Dehner et al.106 proposed DL-MSOT, which applies deep learning denoising to improve optoacoustic contrast in deep tissues. The study introduces a deep learning approach for noise removal before image reconstruction. This algorithm learns spatiotemporal noise-signal correlations using entire optoacoustic sinograms and is trained on real noise and synthetic optoacoustic data. Evaluations showed that it achieved substantially higher vessel contrast at depths over 2 cm in vivo. Other super-resolution methods include works by Ma et al.107 and He et al.108

Fig. 7

Demonstration of two deep learning methods in PAI. (a) 3D U-Net proposed by Jongbeom Kim et al. for reconstruction of high-density superresolution images from fewer raw frames14 and (b) the structure of the CycleGAN proposed by Rui Cao et al., where the CycleGAN is composed of two symmetric generators and corresponding adversarial discriminators.35


High-fidelity deconvolution methods, such as one using RRDBNet,109 also leverage deep learning for resolution improvement. RRDBNet is a deep residual network tailored for image deconvolution. By training on simulated vessels, it showed accurate recovery of features ranging from 30 to 120 μm. It also outperformed conventional methods such as Richardson–Lucy and model-based deconvolution in recovering multiscale features in phantom and in vivo images. Zhang et al.110 discussed the application of PAI for monitoring cancer therapy. By tracking changes in vasculature and oxygenation, the technique provides valuable insights into treatment efficacy. The study demonstrates the potential of PAI as a non-invasive tool for assessing therapeutic response and guiding cancer treatments.
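
For readers who want a concrete sense of the classical baseline these networks are compared against, the sketch below applies Richardson–Lucy deconvolution via scikit-image; the Gaussian PSF here is only a stand-in for the measured system PSF.

```python
import numpy as np
from skimage.restoration import richardson_lucy

def deconvolve_pam(image, psf_sigma=2.0, iterations=30):
    """Classical Richardson-Lucy deconvolution of a PAM maximum-amplitude
    projection image using an assumed Gaussian PSF."""
    ax = np.arange(-8, 9)
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * psf_sigma ** 2))
    psf /= psf.sum()                       # normalized point spread function
    image = image / image.max()            # RL expects non-negative, scaled data
    return richardson_lucy(image, psf, iterations)
```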

In addition, deep learning methods have been combined with the physics of PAI, such as fluence compensation111 and beamforming corrections.112 Deep learning has also been applied in specialized modalities, such as improving Bessel-beam performance113 and processing endoscopic images.114

Madasamy et al.111 used fully dense U-Net-based deconvolution for optical fluence compensation in 3D optoacoustic tomography. Training on heterogeneous breast phantoms showed that the method highlighted deep structures with higher contrast compared with reconstruction without fluence correction. Jeon et al.112 applied deep learning to the mitigation of speed-of-sound (SoS) aberrations in vivo. Their method simultaneously mitigates SoS aberrations, removes streak artifacts, and enhances temporal resolution in both structural and functional in vivo PA images of healthy human limbs and melanoma patients.

For emerging modalities, Zhou et al.113 combined Bessel beam excitation with deep learning to enhance the quantitative performance in multi-parametric PAM using a conditional GAN. It enables simultaneous high-resolution quantification of hemoglobin metrics and cerebral blood flow in live mouse brains (Fig. 6).

6.2.

Motion Correction and Denoising

PAI can be affected by motion artifacts and noise. Deep learning approaches have shown promise in addressing these challenges. For example, motion artifact correction-Net115 corrects motion artifacts in intravascular PA data by learning correlations from simulated training data. It uses a convolutional network to correct motion frame-by-frame while preserving structures. Evaluations showed that it achieved motion suppression comparable to gating but without discarding frames.

Other works such as PA-GAN116 and the method in Ref. 108 train GANs to reduce noise and improve SNR. PA-GAN116 uses an unpaired training approach that does not require matched noisy and clean image pairs. This provides greater flexibility than supervised methods. Experiments showed that PA-GAN achieved a higher peak SNR and structural similarity than U-Net, especially for sparse-view cases. The method in Ref. 108 trains a GAN to emulate the effects of both temporal averaging and singular value decomposition denoising. It effectively enhances SNR in RF data and corresponding PA reconstructions, leading to reduced scan time and laser dose.
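
For context, the SVD denoising that the GAN in Ref. 108 is trained to emulate can be written generically as a rank truncation across repeated acquisitions: components shared by the frames are kept as signal and the rest is discarded as noise. A minimal sketch (not the exact processing of Ref. 108) is:

```python
import numpy as np

def svd_denoise(rf_frames, rank=5):
    """Rank-truncated SVD across repeated acquisitions of the same A-line.

    rf_frames : (n_frames, n_samples) repeated RF acquisitions
    rank      : number of singular components retained as signal
    """
    U, s, Vt = np.linalg.svd(rf_frames, full_matrices=False)
    s[rank:] = 0.0                          # discard small singular values (noise)
    return (U * s) @ Vt                     # denoised frames; average them if desired
```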

6.3.

Analysis and Quantification

Deep learning has also enabled accurate analysis of PA data. Semantic segmentation algorithms can extract interpretable information from multispectral PA images.117 This study introduces a deep learning-based method for semantic segmentation of multispectral PA images, using manually annotated data to train a supervised algorithm.

For quantification, methods such as quantitative optoacoustic tomography (QOAT)-Net118 estimate optical absorption coefficients by training on simulated data-label pairs. QOAT-Net uses a dual-path network design that learns correlations between the imaging data and absorption maps to perform fluence compensation. QOAT-Net is demonstrated to produce high-resolution quantitative absorption images in simulations, phantoms, and ex vivo and in vivo tissues. This innovation facilitates DL-based QOAT and similar imaging applications even when ground-truth data are unavailable.
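
In equation form, the quantification task amounts to inverting the initial-pressure relation from Sec. 1,

$$ \hat{\mu}_a(\mathbf{r}) = \frac{p_0(\mathbf{r})}{\Gamma\,F(\mathbf{r})}, $$

where the fluence $F$ itself depends on the unknown absorption and scattering distributions; this circular dependence is precisely what data-driven approaches such as QOAT-Net learn to resolve.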

US-UNet119 learns from clinical ultrasound and PA features for diagnosis. Using ultrasound morphology features from a pretrained CNN along with PA data, it achieved an area under the ROC curve of 0.94 and accuracy of 0.89 in differentiating ovarian lesions in patient studies. This demonstrates that deep learning can leverage multimodal data for enhanced quantification. Zhao et al.120 proposed a deep learning-based technique for OR-PAM to effectively image and analyze 3D microvasculature datasets. This method overcomes limitations in depth of focus and SNR, showcasing successful segmentation of endogenous and exogenous multi-organ data. Notably, it achieves comprehensive exogenous 3D imaging of mouse brain vasculature at various depths, highlighting its potential for microcirculation imaging in clinical applications.

6.4.

Other Applications

Deep learning has opened possibilities for new PAI capabilities. For example, Schellenberg et al. proposed a GAN to synthesize realistic tissue images for quantitative PAT.121 It uses GANs trained on annotated medical images to generate virtual tissue structures. Adding simulated optical and acoustic properties then yields realistic training data. The method is validated against a traditional model-based approach, demonstrating more realistic synthetic images.

Deep learning algorithms have also enabled new modalities, such as Deep-PAM122 for rapid label-free histological imaging. Deep-PAM combines ultraviolet PAM (UV-PAM) with deep learning to enable rapid and label-free histological imaging, providing rapid assessment of specimens without physical staining or tissue processing. In addition, Cao et al.123 introduced a real-time, label-free method for intraoperative evaluation of thick bone specimens using UV-PAM in reflection mode, as shown in Fig. 7(b). This technique eliminates the need for tissue sectioning and provides detailed three-dimensional contour scans of bone tissue.

7.

PA Plus Other Imaging Modalities

7.1.

Enabling Multimodal Medical Imaging

Breast imaging is one of the clinical applications of PA imaging, and there have been numerous studies on breast PA imaging,124 including the development of PA-US dual-modal systems. Zheng et al. incorporated ultrasound elastography as a modality in addition to PA and US imaging to assess the mechanical properties of the breast.125,126 The system employed motor control and a 3D-printed transducer-fiber combiner to reduce registration errors, enabling a 3D scan of the breast to be completed in 40 s.

Brain imaging is another emerging application of PA imaging, holding great promise for studying brain functionality.127–129 Na et al.130 combined PA imaging with functional ultrasound imaging in the CRUST-PAT system, which employed cross-line ultrasound tomography to provide all-directional sensitivity to blood flow, enabling simultaneous monitoring of cerebral blood flow and oxygenation. The spatial resolution achieved was approximately 170 μm. The imaging results are shown in Figs. 9(a)–9(f). Ni et al.131 successfully demonstrated the imaging of the superior middle cerebral vein in the temporal cortex of a healthy volunteer using multi-spectral optoacoustic tomography (MSOT) and time-of-flight (TOF) magnetic resonance angiography. These initial results show the potential of MSOT in clinical brain imaging. However, the human skull induces strong acoustic aberrations, resulting in significant distortion of deep vascular structures. In these studies, the presence of the skull degraded the signals, leading to a reduction in image resolution. Therefore, PA transcranial imaging remains challenging.

In addition to combining with ultrasound imaging, PAI has also been integrated with other optical imaging modalities, such as optical coherence tomography (OCT) and fluorescence imaging. In a study by Brendon et al.,133 an innovative all-optical microscopy system was introduced; it combined PA remote sensing (PARS), fluorescence microscopy, and confocal laser scanning microscopy. The system overcame the limitation of physical tissue contact required by ultrasound probes. The research successfully demonstrated complementary absorption and fluorescence contrast in cellular and tissue imaging, achieving subcellular resolution. Tianrui et al.134 presented the development of a dual-mode PA and fluorescence microscopy probe for high-speed imaging using a multimode fiber [as shown in Figs. 8(a)–8(d)]. By utilizing a high-speed digital micromirror device with an optimal pattern, the probe performed raster scanning imaging of a focused beam with a diameter of 1.5 μm at the distal end of the optical fiber port. The probe achieved imaging speeds ranging from 2 to 57 frames per second, with a spatial resolution ranging from 1.7 to 3 μm. In another study by Van et al.135 [as shown in Figs. 8(e)–8(i)], multimodal imaging techniques combining PAI, OCT, fluorescence microscopy, and FDA-approved indocyanine green (ICG) were utilized for cellular therapy in ocular diseases. This research involved the transplantation of ICG-labeled human retinal pigment epithelial cells (ARPE-19) into the subretinal space of rabbits, followed by long-term monitoring. The results demonstrated a significant improvement of 37 times in fluorescence imaging signals and 20 times in PA signals after cell transplantation. The signals could be clearly identified and utilized to determine migration locations, survival rates, and cell layer thickness.

Fig. 8

Left: imaging results of carbon fibers and fluorescent microspheres using PARS and fluorescence microscopy by Tianrui et al.134 Right: imaging results of ICG-labeled ARPE-19 cells transplanted into live rabbits using multimodal imaging techniques combining PAI, OCT, and fluorescence microscopy on day one by Van et al.135 (a) Optical microscopy imaging result. (b) PAM imaging result. (c) Fluorescence imaging result. (d) Dual-modal imaging result. (e) Fundus color photography. (f) Fluorescence imaging. (g) PAM imaging result at 578 nm (red) and 700 nm (blue) wavelengths. (h) OCT imaging result, with white arrows indicating the transplanted cells. (i) Three-dimensional OCT reconstruction results, with color representing different depths of the retina.


Numerous emerging applications of PA imaging have also been developed.136 Chen et al.137 applied a PA-US dual-modal imaging system for assessing the activity of scar tissue. The current methods for evaluating scars primarily rely on subjective assessments by physicians. This team utilized PA imaging, ultrasound imaging, elastography, and super micro-vascular imaging to evaluate scar tumors and performed standardized quantitative assessments. Clinical experiments also demonstrated the feasibility of this evaluation model. Wang et al.138 used a PA-US dual-modal imaging system for diagnosing rheumatoid arthritis, a condition characterized by neovascularization, synovial hyperplasia, and cartilage damage. The team employed PA imaging to detect blood vessel formation and used US to assess synovial erosion, correlating them with the severity of arthritis for quantitative analysis.

7.2.

Exciting PA Signal for Ultrasound Imaging

Combining PA and US techniques can overcome some limitations of traditional ultrasound imaging while using the PA effect to assist in ultrasound generation. Typically, a fully optical US probe uses two separate fibers for ultrasound generation and reception. Chen et al.139 instead integrated a PA-based US generation structure and a Fabry–Perot interference-based ultrasound detection structure at the tip of a single double-clad optical fiber, keeping the probe diameter to just 1 mm. Experimental results showed that, in transmitting mode, the probe produced ultrasound with a pressure amplitude of 2.36 MPa, a central frequency of 10.64 MHz, and a 6-dB bandwidth of 22.93 MHz. The researchers also captured, for the first time, forward-viewing pulse-echo signals that varied with transmission distance.
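
For readers who wish to reproduce this kind of transducer characterization, the short Python sketch below estimates the center frequency and −6 dB bandwidth of a recorded pulse waveform from its spectrum. It is a minimal, generic illustration using only NumPy; the windowing choice and the band-midpoint definition of center frequency are assumptions and do not represent the specific processing used in Ref. 139.

```python
import numpy as np

def pulse_spectrum_metrics(pulse, fs):
    """Estimate center frequency and -6 dB bandwidth of an ultrasound pulse.
    pulse : 1D time-domain waveform
    fs    : sampling rate [Hz]
    Returns (center frequency [Hz], bandwidth [Hz], fractional bandwidth [%]).
    """
    n = len(pulse)
    # Windowed magnitude spectrum, normalized to its peak (in dB)
    spec = np.abs(np.fft.rfft(pulse * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec_db = 20.0 * np.log10(spec / spec.max() + 1e-12)
    # Frequencies within 6 dB of the spectral peak
    inband = freqs[spec_db >= -6.0]
    f_low, f_high = inband.min(), inband.max()
    f_center = 0.5 * (f_low + f_high)
    bandwidth = f_high - f_low
    return f_center, bandwidth, 100.0 * bandwidth / f_center
```

In practice, such metrics are usually reported after averaging repeated acquisitions and compensating for the detector response, which this sketch omits.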

In another study, Liu et al.140 developed a fiber-optic ultrasound pulse transmitter based on continuous-wave (CW) laser-triggered thermo-cavitation. By heating a highly absorptive copper nitrate solution with CW laser light, they generated explosive bubbles that emitted strong ultrasound waves. Operating at a wavelength of 980 nm with an optical heating power of 50 mW, they achieved omnidirectional ultrasound pulses with a pressure amplitude of 0.3 MPa and a repetition rate of 5 kHz within the frequency range of 5 to 12 MHz. They used this ultrasound transmitter to construct an all-fiber ultrasound endoscopic probe, eliminating the need for expensive high-energy pulsed lasers and optically absorptive composite films.

7.3.

Improving Algorithm Design

US imaging can also assist in optimizing the reconstruction algorithm in PA imaging.141 In practical PA imaging scenarios, acoustically heterogeneous media can cause artifacts and degrade image quality. One approach is to use physics-based iterative optimization algorithms, which are, however, time consuming.142 Because US imaging provides structural information, combining it with PAI makes it possible to supply prior information about the sound velocity field during PA image reconstruction.143,144
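
As a concrete illustration of how a US-derived segmentation can act as a sound-speed prior, the sketch below computes the straight-ray travel time from an image pixel to a detector through a two-speed medium, with one speed assigned to the segmented region and another to the background. This is a simplified, assumption-laden example (straight rays, two constant speeds, refraction ignored) rather than the method of Refs. 143–145, and the function and parameter names are illustrative only.

```python
import numpy as np

def two_speed_tof(px, py, dx, dy, mask, extent,
                  c_bg=1540.0, c_reg=1450.0, n_samp=200):
    """Straight-ray travel time from pixel (px, py) to detector (dx, dy)
    through a medium with two sound speeds: c_reg inside the US-segmented
    region (binary image `mask`) and c_bg elsewhere.
    extent : (xmin, xmax, ymin, ymax) physical bounds of the mask grid [m]
    """
    # Sample the straight ray at n_samp points
    xs = np.linspace(px, dx, n_samp)
    ys = np.linspace(py, dy, n_samp)
    xmin, xmax, ymin, ymax = extent
    ny, nx = mask.shape
    # Map sample coordinates to mask pixel indices
    ix = np.clip(((xs - xmin) / (xmax - xmin) * (nx - 1)).astype(int), 0, nx - 1)
    iy = np.clip(((ys - ymin) / (ymax - ymin) * (ny - 1)).astype(int), 0, ny - 1)
    frac_in = mask[iy, ix].mean()              # path fraction inside the region
    length = np.hypot(dx - px, dy - py)
    return frac_in * length / c_reg + (1.0 - frac_in) * length / c_bg
```

Such pixel-to-detector travel times can replace the single-speed delays of a standard delay-and-sum backprojection when a reliable segmentation is available.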

Zhang et al.145 employed time of flight (TOF) for automatic segmentation of sample boundaries and automatically searched for the optimal sound speed, demonstrating good robustness. However, the search process was relatively time consuming, and image reconstruction took approximately 14 min. In a follow-up study,132 they developed a real-time (10 Hz) dual-modal system combining ultrasound and PA imaging, which used ultrasound imaging to automatically determine the optimal sound speed and employed selective parallel image reconstruction to increase the imaging speed. Imaging results of human fingers are shown in Figs. 9(g)–9(j).
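
To make the idea of an automatic sound-speed search tangible, the following minimal Python sketch reconstructs a PA image by single-speed delay-and-sum and then picks, from a list of candidate speeds, the one that maximizes a simple image sharpness metric. The gradient-energy metric, the candidate range, and the nearest-sample interpolation are assumptions for illustration; this is not the segmentation-plus-search procedure of Refs. 132 and 145.

```python
import numpy as np

def delay_and_sum(rf, xs, ys, det_xy, fs, c):
    """Naive 2D delay-and-sum PA reconstruction.
    rf     : (n_det, n_samples) PA channel data
    xs, ys : 1D image-grid coordinates [m]
    det_xy : (n_det, 2) detector positions [m]
    fs     : sampling rate [Hz]
    c      : assumed global speed of sound [m/s]
    """
    X, Y = np.meshgrid(xs, ys)
    img = np.zeros(X.shape)
    for k, (dx, dy) in enumerate(det_xy):
        dist = np.hypot(X - dx, Y - dy)
        # Nearest time sample corresponding to each pixel-detector distance
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, rf.shape[1] - 1)
        img += rf[k, idx]
    return img

def sharpness(img):
    # Simple focus metric: normalized gradient energy (an assumed metric,
    # not necessarily the one used in Refs. 132 and 145)
    gx, gy = np.gradient(img)
    return float(np.sum(gx ** 2 + gy ** 2) / (np.sum(img ** 2) + 1e-12))

def search_sound_speed(rf, xs, ys, det_xy, fs,
                       candidates=np.arange(1460.0, 1561.0, 5.0)):
    """Coarse grid search: reconstruct at each candidate speed and keep
    the speed that maximizes the sharpness metric."""
    scores = [sharpness(delay_and_sum(rf, xs, ys, det_xy, fs, c))
              for c in candidates]
    return candidates[int(np.argmax(scores))], scores
```

A coarse-to-fine search and reconstructing only a small region of interest at each candidate speed are common ways to keep such a search fast enough for real-time use.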

Fig. 9

(a)–(f) Results of transcranial brain imaging by Na et al.130 (g)–(j) Imaging results of human finger joints with VF-USPACT by Zhang et al.132 (a) Regions imaged in transcranial brain imaging of mice. (b) Power Doppler images (PDI) of the mouse brain. (c) Velocity amplitude map of the cerebral blood flow (CBF) in the major cortical vessels. (d) Flow vectors in region 2 of the velocity map. (e) PAT-measured functional responses, showing the contralateral functional responses to hindlimb electrical stimulation. (f) Fractional changes of PD, hemoglobin concentrations, and sO2 signals in response to stimulation. (g) Illustration of the finger joint imaging locations. (h) PA and US images reconstructed with the optimal SoS at the middle finger cross section; high PA signals from blood vessels correspond to anechoic regions in the US images. (i) PA and US images of the thumb cross section. (j) Comparison of the image reconstructed with the optimized SoS (1496 m/s) and the one reconstructed with the SoS from the little finger (1500 m/s).

JBO_29_S1_S11513_f009.png

To address the presence of bone, Zhao et al.143 used US imaging to identify and segment the acoustically heterogeneous regions within the measurement area. They then applied a variable truncation time to cut off the PA signals in the time domain, effectively suppressing acoustic artifacts. However, this method is designed for imaging outside the high-sound-speed regions and may not be suitable for scenarios such as transcranial imaging, which requires penetration through high-density areas.
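
The sketch below illustrates, in simplified form, what such a time-domain truncation could look like: each channel is zeroed after the earliest possible arrival time from the US-segmented heterogeneous region, so later signals that have interacted with that region are excluded from reconstruction. The geometry handling, the single background speed, and the optional delay margin are assumptions for illustration and not the exact scheme of Ref. 143.

```python
import numpy as np

def truncate_channels(rf, det_xy, region_pts, fs, c=1500.0, t_margin=0.0):
    """Illustrative channel-wise truncation of PA data.
    rf         : (n_det, n_samples) PA channel data
    det_xy     : (n_det, 2) detector positions [m]
    region_pts : (n_pts, 2) points sampled on the US-segmented heterogeneous
                 region boundary (e.g., bone)
    fs         : sampling rate [Hz]
    c          : background sound speed [m/s]
    t_margin   : optional extra delay before truncation [s]
    Samples arriving after the earliest possible arrival from the
    heterogeneous region are zeroed for each channel.
    """
    rf_out = rf.copy()
    t = np.arange(rf.shape[1]) / fs
    for k, (dx, dy) in enumerate(det_xy):
        # Shortest distance from this detector to the segmented region
        d_min = np.min(np.hypot(region_pts[:, 0] - dx, region_pts[:, 1] - dy))
        t_cut = d_min / c + t_margin           # per-channel truncation time
        rf_out[k, t >= t_cut] = 0.0
    return rf_out
```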

7.4.

Multimodal Endoscopic Imaging

PAE, as a clinical application of PAI, has been under development for a considerable period. However, great obstacles must still be overcome before its clinical implementation can be achieved. The following studies underscore the significant progress made toward moving PAE into clinical application.134,146–149

A major difficulty in multimodal PAE lies in achieving efficient light coupling. Wen et al.150 addressed this issue by introducing a disposable PA-US endoscopic catheter prototype and its corresponding power interface unit. The catheter offered switchability, internal 3D scanning, and system repeatability for gastrointestinal endoscopy. By utilizing high-fluence relays, they reduced the cascaded insertion losses of the optical waveguide to 0.6 dB while maintaining high power-handling performance. They also designed a customized, focus-adjustable, acousto-optic coaxial probe for high-sensitivity optical-resolution PA imaging. Their experiments demonstrated real-time microscopic visualization of microvasculature and stratification in the rat colon, with a lateral resolution of 18 μm and an axial resolution of 63 μm. The rigid part of the catheter was only 13 mm long, showing significant potential for clinical gastrointestinal disease detection.

In another study, Zhu et al.151 addressed the environmental requirements of endoscopy by integrating a miniaturized ultrasound array and an angle-tipped optical fiber into a hydrostatic balloon catheter. The flexible surface of the hydrostatic balloon enabled acoustic coupling to the uneven surfaces of the gas-filled intestine, making it feasible to evaluate colitis and fibrosis with endoscopic PA imaging. When the balloon was collapsed, the catheter probe could potentially be compatible with clinical ileo-colonoscopy. With an imaging penetration depth of 12 mm, they validated the probe’s potential for differentiating normal, acute, and chronic intestinal obstruction in an in vivo rabbit model.

8.

Outlook and Conclusion

In this review, we summarized several typical topics of PAI plus X innovation. More generally, the PAI plus X strategy can be classified into two categories: PAI empowered by X and PAI empowering X. Simply put, PAI empowered by X means that X makes PAI better, whereas PAI empowering X means that PAI makes X better. As shown in Fig. 10, we summarize the outlook and challenges of PAI plus X in a wider scope under these two categories.

Fig. 10

Outlook of PAI plus X.

JBO_29_S1_S11513_f010.png

PAI empowered by X can be further divided into new hardware and new algorithms. Within new hardware, PAI can be empowered by new laser sources, acoustic sensors, mechanical components, and electrical circuits; within new algorithms, PAI can be empowered by various algorithm designs. Each of these hardware designs and algorithms improves the overall performance of the PAI system, and there is still room for further improvement in many aspects (rightmost column in Fig. 10). For example, for learning-based PAI reconstruction, the lack of large clinical datasets remains a bottleneck that impedes the translation of deep learning algorithms to clinical applications. Similarly, PAI empowering X can be further divided into treatment guidance, ultrasound generation, and multimodal imaging. Within treatment guidance, PAI can guide various treatment methods, such as photothermal/HIFU/RF therapy, as well as surgical robots. Within ultrasound generation, PA-generated ultrasound signals can be used for neuro-stimulation and ultrasound imaging. Within multimodal imaging, PAI can provide functional and molecular information, complementing the anatomical imaging of conventional modalities, such as B-mode ultrasound and CT. Some key parameters and advantages are listed in Fig. 10.

Disclosures

No conflicts of interest, financial or otherwise, are declared by the authors.

Code and Data Availability

Data sharing is not applicable to this article, as no new data were created or analyzed.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (Grant No. 61805139), the Shanghai Clinical Research and Trial Center (Grant No. 2022A0305-418-02), and the Double First-Class Initiative Fund of ShanghaiTech University (Grant No. 2022X0203-904-04). There are no relevant financial interests in this manuscript.

References

1. 

L. V. Wang, “Tutorial on photoacoustic microscopy and computed tomography,” IEEE J. Sel. Top. Quantum Electron., 14 (1), 171 –179 https://doi.org/10.1109/JSTQE.2007.913398 IJSQEN 1077-260X (2008). Google Scholar

2. 

P. Beard, “Biomedical photoacoustic imaging,” Interface Focus, 1 (4), 602 –631 https://doi.org/10.1098/rsfs.2011.0028 (2011). Google Scholar

3. 

L. V. Wang and S. Hu, “Photoacoustic tomography: in vivo imaging from organelles to organs,” Science, 335 (6075), 1458 –1462 https://doi.org/10.1126/science.1216210 SCIEAS 0036-8075 (2012). Google Scholar

4. 

M. Basij et al., “Development of an integrated photoacoustic-guided laser ablation intracardiac theranostic system,” in IEEE Int. Ultrasonics Symp. (IUS), 1 –4 (2021). https://doi.org/10.1109/IUS52206.2021.9593442 Google Scholar

5. 

S. Gao et al., “Photoacoustic necrotic region mapping for radiofrequency ablation guidance,” in IEEE Int. Ultrasonics Symp. (IUS), 1 –4 (2021). https://doi.org/10.1109/IUS52206.2021.9593388 Google Scholar

6. 

C. J. Chua et al., “Feasibility of photoacoustic-guided ultrasound treatment for port wine stains,” Lasers Surg. Med., 55 (1), 46 –60 https://doi.org/10.1002/lsm.23609 (2023). Google Scholar

7. 

Y. Zhang and L. Wang, “Array-based high-intensity focused ultrasound therapy system integrated with real-time ultrasound and photoacoustic imaging,” Biomed. Opt. Express, 14 (3), 1137 –1145 https://doi.org/10.1364/BOE.484986 BOEICL 2156-7085 (2023). Google Scholar

8. 

H. Zhu et al., “Miniaturized acoustic focus tunable photoacoustic transmitter probe,” Sens. Actuators, A, 332 113211 https://doi.org/10.1016/j.sna.2021.113211 (2021). Google Scholar

9. 

H. Song et al., “Real-time intraoperative surgical guidance system in the da Vinci surgical robot based on transrectal ultrasound/photoacoustic imaging with photoacoustic markers: an ex vivo demonstration,” IEEE Rob. Autom. Lett., 8 (3), 1287 –1294 https://doi.org/10.1109/LRA.2022.3191788 (2023). Google Scholar

10. 

Y. Ma et al., “Multi-wavelength photoacoustic temperature feedback based photothermal therapy method and system,” Pharmaceutics, 15 (2), 555 https://doi.org/10.3390/pharmaceutics15020555 (2023). Google Scholar

11. 

Y. Ma et al., “Mild-temperature photothermal treatment method and system based on photoacoustic temperature measurement and control,” Biomed. Signal Process. Control, 79 104056 https://doi.org/10.1016/j.bspc.2022.104056 (2023). Google Scholar

12. 

A. Capart et al., “Multiphysical numerical study of photothermal therapy of glioblastoma with photoacoustic temperature monitoring in a mouse head,” Biomed. Opt. Express, 13 (3), 1202 –1223 https://doi.org/10.1364/BOE.444193 BOEICL 2156-7085 (2022). Google Scholar

13. 

Z. Wang et al., “Photoacoustic-guided photothermal therapy by mapping of tumor microvasculature and nanoparticle,” Nanophotonics, 10 (12), 3359 –3368 https://doi.org/10.1515/nanoph-2021-0198 (2021). Google Scholar

14. 

J. Huang et al., “Empirical assessment of laser safety for photoacoustic-guided liver surgeries,” Biomed. Opt. Express, 12 (3), 1205 –1216 https://doi.org/10.1364/BOE.415054 BOEICL 2156-7085 (2021). Google Scholar

15. 

Q. Li et al., “Dynamic acoustic focusing in photoacoustic transmitter,” Photoacoustics, 21 100224 https://doi.org/10.1016/j.pacs.2020.100224 (2021). Google Scholar

16. 

Y. Chen et al., “Fully planar laser-generated focused ultrasound transmitter,” Sens. Actuators, A, 349 113929 https://doi.org/10.1016/j.sna.2022.113929 (2023). Google Scholar

17. 

Y. Chen et al., “Binary amplitude switch for photoacoustic transducer toward dynamic spatial acoustic field modulation,” Opt. Lett., 47 (4), 738 –741 https://doi.org/10.1364/OL.446714 OPLEDP 0146-9592 (2022). Google Scholar

18. 

D. Vella et al., “Ultrasonic photoacoustic emitter of graphene-nanocomposites film on a flexible substrate,” Photoacoustics, 28 100413 https://doi.org/10.1016/j.pacs.2022.100413 (2022). Google Scholar

19. 

G. Chen et al., “High-precision neural stimulation by a highly efficient candle soot fiber optoacoustic emitter,” Front. Neurosci., 16 1005810 https://doi.org/10.3389/fnins.2022.1005810 1662-453X (2022). Google Scholar

20. 

H. Guo et al., “Photoacoustic-triggered nanomedicine delivery to internal organs using a dual-wavelength laparoscope,” J. Biophotonics, 15 (9), e202200116 https://doi.org/10.1002/jbio.202200116 (2022). Google Scholar

21. 

J. Zhang et al., “Snapshot time-reversed ultrasonically encoded optical focusing guided by time-reversed photoacoustic wave,” Photoacoustics, 26 100352 https://doi.org/10.1016/j.pacs.2022.100352 (2022). Google Scholar

22. 

D. Thompson et al., “Laser-induced ultrasound transmitters for large-volume ultrasound tomography,” Photoacoustics, 25 100312 https://doi.org/10.1016/j.pacs.2021.100312 (2022). Google Scholar

23. 

L. Zhu et al., “Surgical navigation system for spinal surgery with photoacoustic endoscopy,” in IEEE Int. Ultrasonics Symp. (IUS), 1 –3 (2022). https://doi.org/10.1109/IUS54386.2022.9957853 Google Scholar

24. 

Z. Zhang et al., “Photoacoustic imaging of tumor vascular involvement and surgical margin pathology for feedback-guided intraoperative tumor resection,” Appl. Phys. Lett., 121 (19), 193702 https://doi.org/10.1063/5.0128076 APPLAB 0003-6951 (2022). Google Scholar

25. 

L. A. Kasatkina et al., “Optogenetic manipulation and photoacoustic imaging using a near-infrared transgenic mouse model,” Nat. Commun., 13 (1), 2813 https://doi.org/10.1038/s41467-022-30547-6 NCAOBW 2041-1723 (2022). Google Scholar

26. 

T. Zhao et al., “Ultrathin, high-speed, all-optical photoacoustic endomicroscopy probe for guiding minimally invasive surgery,” Biomed. Opt. Express, 13 (8), 4414 –4428 https://doi.org/10.1364/BOE.463057 BOEICL 2156-7085 (2022). Google Scholar

27. 

M. Shi et al., “Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets,” Photoacoustics, 26 100351 https://doi.org/10.1016/j.pacs.2022.100351 (2022). Google Scholar

28. 

Z. Fang et al., “A mixer-supported adaptable silicon-integrated edge coherent photoacoustic system-on-chip for precise in vivo sensing and enhanced bio-imaging,” in IEEE Int. Symp. Circuits and Syst. (ISCAS), 51 –54 (2022). https://doi.org/10.1109/ISCAS48785.2022.9937818 Google Scholar

29. 

Z. Fang et al., “A quadrature adaptive coherent Lock-in chip-based sensor for accurate photoacoustic detection,” in 2020 IEEE Int. Symp. on Circuits and Syst. (ISCAS), (2020). https://doi.org/10.1109/ISCAS45731.2020.9180612 Google Scholar

30. 

Z. Cheng et al., “Photoacoustic maximum amplitude projection microscopy by ultra-low data sampling,” Opt. Lett., 48 (7), 1718 –1721 https://doi.org/10.1364/OL.485628 OPLEDP 0146-9592 (2023). Google Scholar

31. 

D. Jiang et al., “Hand-held free-scan 3D photoacoustic tomography with global positioning system,” J. Appl. Phys., 132 (7), 074904 https://doi.org/10.1063/5.0095919 JAPIAU 0021-8979 (2022). Google Scholar

32. 

Z. Xu et al., “Visualizing tumor angiogenesis and boundary with polygon-scanning multiscale photoacoustic microscopy,” Photoacoustics, 26 100342 https://doi.org/10.1016/j.pacs.2022.100342 (2022). Google Scholar

33. 

M. Chen et al., “High-speed functional photoacoustic microscopy using a water-immersible two-axis torsion-bending scanner,” Photoacoustics, 24 100309 https://doi.org/10.1016/j.pacs.2021.100309 (2021). Google Scholar

34. 

Y. Wang et al., “Low-cost photoacoustic tomography system enabled by frequency-division multiplexing,” in 2021 IEEE Int. Symp. Circuits and Syst. (ISCAS), (2021). https://doi.org/10.1109/ISCAS51556.2021.9401775 Google Scholar

35. 

D. Jiang et al., “Low-cost photoacoustic tomography system based on multi-channel delay-line module,” IEEE Trans. Circuits Syst. II Express Briefs, 66 (5), 778 –782 https://doi.org/10.1109/TCSII.2019.2908432 (2019). Google Scholar

36. 

D. Jiang et al., “Programmable acoustic delay-line enabled low-cost photoacoustic tomography system,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 69 2075 –2084 https://doi.org/10.1109/TUFFC.2022.3166841 (2022). Google Scholar

37. 

H. Lan et al., “Deep learning enabled real-time photoacoustic tomography system via single data acquisition channel,” Photoacoustics, 22 100270 https://doi.org/10.1016/j.pacs.2021.100270 (2021). Google Scholar

38. 

Y. Shen et al., “Accelerating model-based photoacoustic image reconstruction in vivo based on s-wave,” in 2022 IEEE Int. Ultrasonics Symp. (IUS), (2022). https://doi.org/10.1109/IUS54386.2022.9958557 Google Scholar

39. 

Z. Gao et al., “Implementation and comparison of three image reconstruction algorithms in FPGA towards palm-size photoacoustic tomography,” IEEE Sens. J., 23 8605 –8612 https://doi.org/10.1109/JSEN.2023.3252814 ISJEAZ 1530-437X (2023). Google Scholar

40. 

Y. Wang et al., “Visible CCD camera-guided photoacoustic imaging system for precise navigation during functional rat brain imaging,” Biosensors, 13 (1), 107 https://doi.org/10.3390/bios13010107 BISSED 0265-928X (2023). Google Scholar

41. 

D. Jiang et al., “Hand-held 3D photoacoustic imaging system with GPS,” in IEEE Int. Ultrasonics Symp. (IUS), 1 –4 (2022). https://doi.org/10.1109/IUS54386.2022.9957259 Google Scholar

42. 

J. Chen et al., “Freehand scanning photoacoustic microscopy with simultaneous localization and mapping,” Photoacoustics, 28 100411 https://doi.org/10.1016/j.pacs.2022.100411 (2022). Google Scholar

43. 

L. Wang et al., “Optical-visualized photoacoustic tomographic navigation,” Appl. Phys. Lett., 122 (2), 023701 https://doi.org/10.1063/5.0135655 APPLAB 0003-6951 (2023). Google Scholar

44. 

R. Shintate et al., “High-speed optical resolution photoacoustic microscopy with MEMS scanner using a novel and simple distortion correction method,” Sci. Rep., 12 (1), 9221 https://doi.org/10.1038/s41598-022-12865-3 SRCEC3 2045-2322 (2022). Google Scholar

45. 

J. Zhang et al., “Organ‐PAM: photoacoustic microscopy of whole‐organ multiset vessel systems,” Laser Photonics Rev., 17 (7), 2201031 https://doi.org/10.1002/lpor.202201031 1863-8899 (2023). Google Scholar

46. 

X. Zhu et al., “Real-time whole-brain imaging of hemodynamics and oxygenation at micro-vessel resolution with ultrafast wide-field photoacoustic microscopy,” Light Sci. Appl., 11 (1), 138 https://doi.org/10.1038/s41377-022-00836-2 (2022). Google Scholar

47. 

H. Guo et al., “Detachable head-mounted photoacoustic microscope in freely moving mice,” Opt. Lett., 46 (24), 6055 –6058 https://doi.org/10.1364/OL.444226 OPLEDP 0146-9592 (2021). Google Scholar

48. 

T. Hirasawa et al., “Spectroscopic photoacoustic microscopic imaging during single spatial scan using broadband excitation light pulses with wavelength-dependent time delay,” Photoacoustics, 26 100364 https://doi.org/10.1016/j.pacs.2022.100364 (2022). Google Scholar

49. 

T. Duan et al., “Detection of weak optical absorption by optical-resolution photoacoustic microscopy,” Photoacoustics, 25 100335 https://doi.org/10.1016/j.pacs.2022.100335 (2022). Google Scholar

50. 

D. Ke et al., “Miniature fiber scanning probe for flexible forward-view photoacoustic endoscopy,” Appl. Phys. Lett., 122 (12), 123703 https://doi.org/10.1063/5.0142792 APPLAB 0003-6951 (2023). Google Scholar

51. 

Y. Zhao et al., “Design of continuously-adjustable light-scanning handheld probe for photoacoustic imaging,” IEEE Photonics J., 13 (5), 4100106 https://doi.org/10.1109/JPHOT.2021.3115966 (2021). Google Scholar

52. 

G. Nteroli et al., “Enhanced resolution optoacoustic microscopy using a picosecond high repetition rate Q-switched microchip laser,” J. Biomed. Opt., 27 (11), 110501 https://doi.org/10.1117/1.JBO.27.11.110501 JBOPFO 1083-3668 (2022). Google Scholar

53. 

Y. Notsuka et al., “Improvement of spatial resolution in photoacoustic microscopy using transmissive adaptive optics with a low-frequency ultrasound transducer,” Opt. Express, 30 (2), 2933 –2948 https://doi.org/10.1364/OE.446309 OPEXFF 1094-4087 (2022). Google Scholar

54. 

R. Cao et al., “Optical-resolution photoacoustic microscopy with a needle-shaped beam,” Nat. Photonics, 17 (1), 89 –95 https://doi.org/10.1038/s41566-022-01112-w NPAHBY 1749-4885 (2022). Google Scholar

55. 

X. Li et al., “Low-cost high-resolution photoacoustic microscopy of blood oxygenation with two laser diodes,” Biomed. Opt. Express, 13 (7), 3893 –3903 https://doi.org/10.1364/BOE.458645 BOEICL 2156-7085 (2022). Google Scholar

56. 

Y. Guo, B. Li and X. Yin, “Dual-compressed photoacoustic single-pixel imaging,” Natl. Sci. Rev., 10 (1), nwac058 https://doi.org/10.1093/nsr/nwac058 (2023). Google Scholar

57. 

S. Van Heumen et al., “LED-based photoacoustic imaging for preoperative visualization of lymphatic vessels in patients with secondary limb lymphedema,” Photoacoustics, 29 100446 https://doi.org/10.1016/j.pacs.2022.100446 (2023). Google Scholar

58. 

L. Deng et al., “Compact long-working-distance laser-diode-based photoacoustic microscopy with a reflective objective,” Chin. Opt. Lett., 19 (7), 071701 https://doi.org/10.3788/COL202119.071701 CJOEE3 1671-7694 (2021). Google Scholar

59. 

S. Liang et al., “Cerebrovascular imaging in vivo by non-contact photoacoustic microscopy based on photoacoustic remote sensing with a laser diode for interrogation,” Opt. Lett., 47 (1), 18 –21 https://doi.org/10.1364/OL.446787 OPLEDP 0146-9592 (2022). Google Scholar

60. 

X. Song et al., “Multiscale photoacoustic imaging without motion using single-pixel imaging,” J. Biophotonics, 15 (3), e202100299 https://doi.org/10.1002/jbio.202100299 (2022). Google Scholar

61. 

N. Chen et al., “Video-rate high-resolution single-pixel nonscanning photoacoustic microscopy,” Biomed. Opt. Express, 13 (7), 3823 –3835 https://doi.org/10.1364/BOE.459363 BOEICL 2156-7085 (2022). Google Scholar

62. 

Y. Ren et al., “Optical fiber-based handheld polarized photoacoustic computed tomography for detecting anisotropy of tissues,” Quant. Imaging Med. Surg., 12 (4), 2238 –2246 https://doi.org/10.21037/qims-21-658 (2022). Google Scholar

63. 

L. Mukhangaliyeva et al., “Deformable mirror-based photoacoustic remote sensing (PARS) microscopy for depth scanning,” Biomed. Opt. Express, 13 (11), 5643 –5653 https://doi.org/10.1364/BOE.471770 BOEICL 2156-7085 (2022). Google Scholar

64. 

H. Lee et al., “Nanosecond SRS fiber amplifier for label-free near-infrared photoacoustic microscopy of lipids,” Photoacoustics, 25 100331 https://doi.org/10.1016/j.pacs.2022.100331 (2022). Google Scholar

65. 

K. Tachi et al., “Chromatic-aberration-free multispectral optical-resolution photoacoustic microscopy using reflective optics and a supercontinuum light source,” Appl. Opt., 60 (31), 9651 –9658 https://doi.org/10.1364/AO.434817 APOPAI 0003-6935 (2021). Google Scholar

66. 

X. Luo et al., “Broadband high-frequency ultrasonic transducer based functional photoacoustic mesoscopy for psoriasis progression,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 69 (6), 1926 –1931 https://doi.org/10.1109/TUFFC.2021.3136870 ITUCER 0885-3010 (2022). Google Scholar

67. 

Y.-H. Liu et al., “Sensitive ultrawideband transparent PVDF-ITO ultrasound detector for optoacoustic microscopy,” Opt. Lett., 47 (16), 4163 –4166 https://doi.org/10.1364/OL.462369 OPLEDP 0146-9592 (2022). Google Scholar

68. 

M. Ghavami, A. K. Ilkhechi and R. Zemp, “Flexible transparent CMUT arrays for photoacoustic tomography,” Opt. Express, 30 (10), 15877 –15894 https://doi.org/10.1364/OE.455796 OPEXFF 1094-4087 (2022). Google Scholar

69. 

Z. Yan and J. Zou, “Large-scale surface-micromachined optical ultrasound transducer (SMOUT) array for photoacoustic computed tomography,” Opt. Express, 30 (11), 19069 –19080 https://doi.org/10.1364/OE.458464 OPEXFF 1094-4087 (2022). Google Scholar

70. 

L. Yang et al., “Miniaturized fiber optic ultrasound sensor with multiplexing for photoacoustic imaging,” Photoacoustics, 28 100421 https://doi.org/10.1016/j.pacs.2022.100421 (2022). Google Scholar

71. 

J. Cai et al., “Beyond fundamental resonance mode: high-order multi-band ALN PMUT for in vivo photoacoustic imaging,” Microsyst. Nanoeng., 8 116 https://doi.org/10.1038/s41378-022-00426-7 (2022). Google Scholar

72. 

Y. Shan et al., “Spectroscopically resolved photoacoustic microscopy using a broadband surface plasmon resonance sensor,” Appl. Phys. Lett., 120 (12), 123701 https://doi.org/10.1063/5.0085321 APPLAB 0003-6951 (2022). Google Scholar

73. 

X. Luo et al., “Stack-layer dual-element ultrasonic transducer for broadband functional photoacoustic tomography,” Front. Bioeng. Biotechnol., 9 786376 https://doi.org/10.3389/fbioe.2021.786376 (2021). Google Scholar

74. 

M. Chen et al., “High-speed wide-field photoacoustic microscopy using a cylindrically focused transparent high-frequency ultrasound transducer,” Photoacoustics, 28 100417 https://doi.org/10.1016/j.pacs.2022.100417 (2022). Google Scholar

75. 

M. S. Osman et al., “A novel matching layer design for improving the performance of transparent ultrasound transducers,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 69 (9), 2672 –2680 https://doi.org/10.1109/TUFFC.2022.3195998 ITUCER 0885-3010 (2022). Google Scholar

76. 

H. Peng et al., “Photoacoustic microscopy based on transparent piezoelectric ultrasound transducers,” J. Innov. Opt. Health Sci., 16 2330001 https://doi.org/10.1142/S179354582330001X (2023). Google Scholar

77. 

Y.-H. Liu et al., “Transparent flexible piezoelectric ultrasound transducer for photoacoustic imaging system,” IEEE Sens. J., 22 (3), 2070 –2077 https://doi.org/10.1109/JSEN.2021.3135872 ISJEAZ 1530-437X (2022). Google Scholar

78. 

D. Zhang et al., “An ellipsoidal focused ultrasound transducer for extend-focus photoacoustic microscopy,” IEEE Trans. Biomed. Eng., 68 (12), 3748 –3752 https://doi.org/10.1109/TBME.2021.3078729 IEBEAX 0018-9294 (2021). Google Scholar

79. 

J. Ma et al., “Transparent microfiber Fabry-Perot ultrasound sensor with needle-shaped focus for multiscale photoacoustic imaging,” Photoacoustics, 30 100482 https://doi.org/10.1016/j.pacs.2023.100482 (2023). Google Scholar

80. 

A. Gholampour et al., “Multiperspective photoacoustic imaging using spatially diverse CMUTs,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 70 (1), 16 –24 https://doi.org/10.1109/TUFFC.2022.3220999 ITUCER 0885-3010 (2023). Google Scholar

81. 

Q. Zheng et al., “Thin ceramic PZT dual- and multi-frequency pMUT arrays for photoacoustic imaging,” Microsyst. Nanoeng., 8 122 https://doi.org/10.1038/s41378-022-00449-0 (2022). Google Scholar

82. 

L. Fu et al., “Photoacoustic imaging of posterior periodontal pocket using a commercial hockey-stick transducer,” J. Biomed. Opt., 27 (5), 056005 https://doi.org/10.1117/1.JBO.27.5.056005 JBOPFO 1083-3668 (2022). Google Scholar

83. 

A. K. Ustun and J. Zou, “A photoacoustic sensing probe based on silicon acoustic delay lines,” IEEE Sens. J., 21 (19), 21371 –21377 https://doi.org/10.1109/JSEN.2021.3103932 ISJEAZ 1530-437X (2021). Google Scholar

84. 

Y. Wang et al., “Photoacoustic dual-mode microsensor based on PMUT technology,” in IEEE Int. Symp. Circuits and Syst. (ISCAS), 2836 –2840 (2022). https://doi.org/10.1109/ISCAS48785.2022.9937967 Google Scholar

85. 

J. Cai et al., “Photoacoustic and ultrasound dual-modality endoscopic imaging based on ALN PMUT array,” in IEEE 35th Int. Conf. Micro Electro Mech. Syst. Conf. (MEMS), 412 –415 (2022). https://doi.org/10.1109/MEMS51670.2022.9699511 Google Scholar

86. 

J. Czuchnowski and R. Prevedel, “Comparing free-space and fiber-coupled detectors for Fabry-Perot-based all-optical photoacoustic tomography,” J. Biomed. Opt., 27 (4), 046001 https://doi.org/10.1117/1.JBO.27.4.046001 JBOPFO 1083-3668 (2022). Google Scholar

87. 

J. Saucourt et al., “Fast interrogation wavelength tuning for all-optical photoacoustic imaging,” Opt. Express, 31 (7), 11164 –11172 https://doi.org/10.1364/OE.476747 OPEXFF 1094-4087 (2023). Google Scholar

88. 

W. Song et al., “Toward ultrasensitive, broadband, reflection‐mode in vivo photoacoustic microscopy using a bare glass,” Laser Photonics Rev., 17 (1), 2200030 https://doi.org/10.1002/lpor.202200030 1863-8899 (2022). Google Scholar

89. 

P. Zhang et al., “All-optical ultrasonic detector based on differential interference,” Opt. Lett., 47 (18), 4790 –4793 https://doi.org/10.1364/OL.470486 OPLEDP 0146-9592 (2022). Google Scholar

90. 

F. Yang, Z. Chen and D. Xing, “All-optical noncontact phase-domain photoacoustic elastography,” Opt. Lett., 46 (19), 5063 –5066 https://doi.org/10.1364/OL.438599 OPLEDP 0146-9592 (2021). Google Scholar

91. 

X. Jiang et al., “A total-internal-reflection-based Fabry-Perot resonator for ultra-sensitive wideband ultrasound and photoacoustic applications,” Photoacoustics, 30 100466 https://doi.org/10.1016/j.pacs.2023.100466 (2023). Google Scholar

92. 

A. Doug Deen et al., “Spectroscopic thermo-elastic optical coherence tomography for tissue characterization,” Biomed. Opt. Express, 13 (3), 1430 –1446 https://doi.org/10.1364/BOE.447911 BOEICL 2156-7085 (2022). Google Scholar

93. 

Z. Ma et al., “Spectral interference contrast based non-contact photoacoustic microscopy realized by SDOCT,” Opt. Lett., 47 (11), 2895 –2898 https://doi.org/10.1364/OL.458383 OPLEDP 0146-9592 (2022). Google Scholar

94. 

Y. Hazan et al., “Silicon-photonics acoustic detector for optoacoustic micro-tomography,” Nat. Commun., 13 (1), 1488 https://doi.org/10.1038/s41467-022-29179-7 NCAOBW 2041-1723 (2022). Google Scholar

95. 

W. Song et al., “Ultrasensitive broadband photoacoustic microscopy based on common-path interferometric surface plasmon resonance sensing,” Photoacoustics, 28 100419 https://doi.org/10.1016/j.pacs.2022.100419 (2022). Google Scholar

96. 

F. Yang et al., “Broadband surface plasmon resonance sensor for fast spectroscopic photoacoustic microscopy,” Photoacoustics, 24 100305 https://doi.org/10.1016/j.pacs.2021.100305 (2021). Google Scholar

97. 

S. Choi et al., “Deep learning enhances multiparametric dynamic volumetric photoacoustic computed tomography in vivo (DL-PACT),” Adv. Sci., 10 2202089 https://doi.org/10.1002/advs.202202089 1936-6612 (2022). Google Scholar

98. 

J. Zhang et al., “Limited-view photoacoustic imaging reconstruction with dual domain inputs based on mutual information,” in IEEE 18th Int. Symp. Biomed. Imaging (ISBI), 1522 –1526 (2021). https://doi.org/10.1109/ISBI48211.2021.9433949 Google Scholar

99. 

D. Seong et al., “Three-dimensional reconstructing undersampled photoacoustic microscopy images using deep learning,” Photoacoustics, 29 100429 https://doi.org/10.1016/j.pacs.2022.100429 (2022). Google Scholar

100. 

J. Gong et al., “Deep learning regularized acceleration for photoacoustic image reconstruction,” in IEEE Int. Ultrasonics Symp. (IUS), 1 –4 (2021). https://doi.org/10.1109/IUS52206.2021.9593560 Google Scholar

101. 

M. Guo et al., “AS-net: fast photoacoustic reconstruction with multi-feature fusion from sparse data,” IEEE Trans. Comput. Imaging, 8 215 –223 https://doi.org/10.1109/TCI.2022.3155379 (2021). Google Scholar

102. 

Z. Zhang et al., “Deep and domain transfer learning aided photoacoustic microscopy: acoustic resolution to optical resolution,” IEEE Trans. Med. Imaging, 41 3636 –3648 https://doi.org/10.1109/TMI.2022.3192072 ITMID4 0278-0062 (2022). Google Scholar

103. 

S.-W. Cheng et al., “High-resolution photoacoustic microscopy with deep penetration through learning,” Photoacoustics, 25 100314 https://doi.org/10.1016/j.pacs.2021.100314 (2021). Google Scholar

104. 

H. Zhang et al., “Deep-E: a fully-dense neural network for improving the elevation resolution in linear-array-based photoacoustic tomography,” IEEE Trans. Med. Imaging, 41 1279 –1288 https://doi.org/10.1109/TMI.2021.3137060 ITMID4 0278-0062 (2021). Google Scholar

105. 

J. Kim et al., “Deep learning acceleration of multiscale superresolution localization photoacoustic imaging,” Light Sci. Appl., 11 131 https://doi.org/10.1038/s41377-022-00820-w (2022). Google Scholar

106. 

C. Dehner et al., “Deep-learning-based electrical noise removal enables high spectral optoacoustic contrast in deep tissue,” IEEE Trans. Med. Imaging, 41 3182 –3193 https://doi.org/10.1109/TMI.2022.3180115 ITMID4 0278-0062 (2021). Google Scholar

107. 

Y. Ma et al., “Cascade neural approximating for few-shot super-resolution photoacoustic angiography,” Appl. Phys. Lett., 121 103701 https://doi.org/10.1063/5.0100424 APPLAB 0003-6951 (2022). Google Scholar

108. 

D. He et al., “De-noising of photoacoustic microscopy images by attentive generative adversarial network,” IEEE Trans. Med. Imaging, 42 1349 –1362 https://doi.org/10.1109/TMI.2022.3227105 ITMID4 0278-0062 (2022). Google Scholar

109. 

F. Feng et al., “High-fidelity deconvolution for acoustic-resolution photoacoustic microscopy enabled by convolutional neural networks,” Photoacoustics, 26 100360 https://doi.org/10.1016/j.pacs.2022.100360 (2022). Google Scholar

110. 

Z. Zhang et al., “Learning-based algorithm for real imaging system enhancement: acoustic resolution to optical resolution photoacoustic microscopy,” in IEEE Int. Symp. Circuits and Syst. (ISCAS), 2458 –2462 (2022). https://doi.org/10.1109/ISCAS48785.2022.9937914 Google Scholar

111. 

A. Madasamy et al., “Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging,” J. Biomed. Opt., 27 106004 https://doi.org/10.1117/1.JBO.27.10.106004 JBOPFO 1083-3668 (2022). Google Scholar

112. 

S. Jeon et al., “A deep learning-based model that reduces speed of sound aberrations for improved in vivo photoacoustic imaging,” IEEE Trans. Image Process., 30 8773 –8784 https://doi.org/10.1109/TIP.2021.3120053 IIPRE4 1057-7149 (2021). Google Scholar

113. 

Y. Zhou, N. Sun and S. Hu, “Deep learning-powered bessel-beam multiparametric photoacoustic microscopy,” IEEE Trans. Med. Imaging, 41 (12), 3544 –3551 https://doi.org/10.1109/TMI.2022.3188739 (2022). Google Scholar

114. 

O. Gulenko et al., “Deep-learning-based algorithm for the removal of electromagnetic interference noise in photoacoustic endoscopic image processing,” Sensors, 22 3961 https://doi.org/10.3390/s22103961 SNSRES 0746-9462 (2022). Google Scholar

115. 

S. Zheng et al., “A deep learning method for motion artifact correction in intravascular photoacoustic image sequence,” IEEE Trans. Med. Imaging, 42 66 –78 https://doi.org/10.1109/TMI.2022.3202910 ITMID4 0278-0062 (2022). Google Scholar

116. 

M. Lu et al., “Artifact removal in photoacoustic tomography with an unsupervised method,” Biomed. Opt. Express, 12 (10), 6284 –6299 https://doi.org/10.1364/BOE.434172 BOEICL 2156-7085 (2021). Google Scholar

117. 

J. Gröhl et al., “Semantic segmentation of multispectral photoacoustic images using deep learning,” Photoacoustics, 26 100341 https://doi.org/10.1016/j.pacs.2022.100341 (2021). Google Scholar

118. 

J. Li et al., “Deep learning-based quantitative optoacoustic tomography of deep tissues in the absence of labeled experimental data,” Optica, 9 32 –41 https://doi.org/10.1364/OPTICA.438502 (2021). Google Scholar

119. 

Y.-K. Zou et al., “Ultrasound-enhanced Unet model for quantitative photoacoustic tomography of ovarian lesions,” Photoacoustics, 28 100420 https://doi.org/10.1016/j.pacs.2022.100420 (2022). Google Scholar

120. 

H. Zhao et al., “Deep learning-based optical-resolution photoacoustic microscopy for in vivo 3D microvasculature imaging and segmentation,” Adv. Intell. Syst., 4 2200004 https://doi.org/10.1002/aisy.202200004 (2022). Google Scholar

121. 

M. Schellenberg et al., “Photoacoustic image synthesis with generative adversarial networks,” Photoacoustics, 28 100402 https://doi.org/10.1016/j.pacs.2022.100402 (2021). Google Scholar

122. 

L. Kang et al., “Deep learning enables ultraviolet photoacoustic microscopy based histological imaging with near real-time virtual staining,” Photoacoustics, 25 100308 https://doi.org/10.1016/j.pacs.2021.100308 (2021). Google Scholar

123. 

R. Cao et al., “Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy,” Nat. Biomed. Eng., 7 124 –134 https://doi.org/10.1038/s41551-022-00940-z (2022). Google Scholar

124. 

M. Dantuma et al., “Suite of 3D test objects for performance assessment of hybrid photoacoustic-ultrasound breast imaging systems,” J. Biomed. Opt., 27 (7), 074709 https://doi.org/10.1117/1.JBO.27.7.074709 JBOPFO 1083-3668 (2022). Google Scholar

125. 

E. Zheng et al., “Second-generation dual scan mammoscope with photoacoustic, ultrasound, and elastographic imaging capabilities,” Front. Oncol., 11 779071 https://doi.org/10.3389/fonc.2021.779071 (2021). Google Scholar

126. 

E. Zheng et al., “Volumetric tri-modal imaging with combined photoacoustic, ultrasound, and shear wave elastography,” J. Appl. Phys., 132 (3), 034902 https://doi.org/10.1063/5.0093619 JAPIAU 0021-8979 (2022). Google Scholar

127. 

L. Menozzi et al., “Three-dimensional non-invasive brain imaging of ischemic stroke by integrated photoacoustic, ultrasound and angiographic tomography (PAUSAT),” Photoacoustics, 29 100444 https://doi.org/10.1016/j.pacs.2022.100444 (2023). Google Scholar

128. 

S. Mirg et al., “Awake mouse brain photoacoustic and optical imaging through a transparent ultrasound cranial window,” Opt. Lett., 47 (5), 1121 –1124 https://doi.org/10.1364/OL.450648 OPLEDP 0146-9592 (2022). Google Scholar

129. 

Q. Chen et al., “Dual-model wearable photoacoustic microscopy and electroencephalograph: study of neurovascular coupling in anesthetized and freely moving rats,” Biomed. Opt. Express, 12 (10), 6614 –6628 https://doi.org/10.1364/BOE.438596 BOEICL 2156-7085 (2021). Google Scholar

130. 

S. Na, Y. Zhang and L. V. Wang, “Cross‐ray ultrasound tomography and photoacoustic tomography of cerebral hemodynamics in rodents,” Adv. Sci., 9 (25), 2201104 https://doi.org/10.1002/advs.202201104 1936-6612 (2022). Google Scholar

131. 

R. Ni et al., “Coregistered transcranial optoacoustic and magnetic resonance angiography of the human brain,” Opt. Lett., 48 (3), 648 –651 https://doi.org/10.1364/OL.475578 OPLEDP 0146-9592 (2023). Google Scholar

132. 

Y. Zhang and L. Wang, “Video-rate full-ring ultrasound and photoacoustic computed tomography with real-time sound speed optimization,” Biomed. Opt. Express, 13 (8), 4398 –4413 https://doi.org/10.1364/BOE.464360 BOEICL 2156-7085 (2022). Google Scholar

133. 

B. S. Restall et al., “Multimodal 3D photoacoustic remote sensing and confocal fluorescence microscopy imaging,” J. Biomed. Opt., 26 (9), 096501 https://doi.org/10.1117/1.JBO.26.9.096501 JBOPFO 1083-3668 (2021). Google Scholar

134. 

T. Zhao et al., “Video-rate dual-modal photoacoustic and fluorescence imaging through a multimode fibre towards forward-viewing endomicroscopy,” Photoacoustics, 25 100323 https://doi.org/10.1016/j.pacs.2021.100323 (2022). Google Scholar

135. 

V. P. Nguyen et al., “In vivo subretinal ARPE-19 cell tracking using indocyanine green contrast-enhanced multimodality photoacoustic microscopy, optical coherence tomography, and fluorescence imaging for regenerative medicine,” Transl. Vis. Sci. Technol., 10 (10), 10 https://doi.org/10.1167/tvst.10.10.10 (2021). Google Scholar

136. 

X. Liu et al., “Noninvasive photoacoustic computed tomography/ultrasound imaging to identify high-risk atherosclerotic plaques,” Eur. J. Nucl. Med. Mol. Imaging, 49 (13), 4601 –4615 https://doi.org/10.1007/s00259-022-05911-9 (2022). Google Scholar

137. 

C. Chen et al., “Activity of keloids evaluated by multimodal photoacoustic/ultrasonic imaging system,” Photoacoustics, 24 100302 https://doi.org/10.1016/j.pacs.2021.100302 (2021). Google Scholar

138. 

G. Zhang et al., “Photoacoustic/ultrasound dual modality imaging aided by acoustic reflectors,” Chin. Opt. Lett., 19 (12), 121702 https://doi.org/10.3788/COL202119.121702 (2021). Google Scholar

139. 

Y. Chen et al., “Single optical fiber based forward-viewing all-optical ultrasound self-transceiving probe,” Opt. Lett., 48 (4), 868 –871 https://doi.org/10.1364/OL.479718 OPLEDP 0146-9592 (2023). Google Scholar

140. 

H. Liu et al., “Ultrasound pulse generation through continuous-wave laser excited thermo-cavitation for all-optical ultrasound imaging,” APL Photonics, 8 (4), 046102 https://doi.org/10.1063/5.0142684 (2023). Google Scholar

141. 

G. Jin et al., “A signal domain object segmentation method for ultrasound and photoacoustic computed tomography,” IEEE Trans. Ultrason. Ferroelectr. Freq. Control., 70 253 –265 https://doi.org/10.1109/TUFFC.2022.3232174 (2022). Google Scholar

142. 

S. Zhao et al., “Hybrid photoacoustic and fast super-resolution ultrasound imaging,” Nat. Commun., 14 (1), 2191 https://doi.org/10.1038/s41467-023-37680-w NCAOBW 2041-1723 (2023). Google Scholar

143. 

Y. Zhao et al., “Ultrasound-guided adaptive photoacoustic tomography,” Opt. Lett., 47 (15), 3960 –3963 https://doi.org/10.1364/OL.462799 OPLEDP 0146-9592 (2022). Google Scholar

144. 

S. Zhang et al., “MRI information-based correction and restoration of photoacoustic tomography,” IEEE Trans. Med. Imaging, 41 (9), 2543 –2555 https://doi.org/10.1109/TMI.2022.3165839 ITMID4 0278-0062 (2022). Google Scholar

145. 

Y. Zhang and L. Wang, “Adaptive dual-speed ultrasound and photoacoustic computed tomography,” Photoacoustics, 27 100380 https://doi.org/10.1016/j.pacs.2022.100380 (2022). Google Scholar

146. 

Q. C. Zhu, “A review of co-registered transvaginal photoacoustic and ultrasound imaging for ovarian cancer diagnosis,” Curr. Opin. Biomed. Eng., 22 100381 https://doi.org/10.1016/j.cobme.2022.100381 (2022). Google Scholar

147. 

Z. Xie et al., “Circular array transducer based-photoacoustic/ultrasonic endoscopic imaging with tunable ring-beam excitation,” Photoacoustics, 29 100441 https://doi.org/10.1016/j.pacs.2022.100441 (2023). Google Scholar

148. 

M. Kim et al., “Intra-instrument channel workable, optical-resolution photoacoustic and ultrasonic mini-probe system for gastrointestinal endoscopy,” Photoacoustics, 26 100346 https://doi.org/10.1016/j.pacs.2022.100346 (2022). Google Scholar

149. 

K. Kim et al., “Tapered catheter-based transurethral photoacoustic and ultrasonic endoscopy of the urinary system,” Opt. Express, 30 (15), 26169 –26181 https://doi.org/10.1364/OE.461855 OPEXFF 1094-4087 (2022). Google Scholar

150. 

X. Wen et al., “High-fluence relay-based disposable photoacoustic-ultrasonic endoscopy for in vivo anatomical imaging of gastrointestinal tract,” Photonics Res., 11 (1), 55 –64 https://doi.org/10.1364/PRJ.470737 (2023). Google Scholar

151. 

Y. Zhu et al., “Prototype endoscopic photoacoustic-ultrasound balloon catheter for characterizing intestinal obstruction,” Biomed. Opt. Express, 13 (6), 3355 –3365 https://doi.org/10.1364/BOE.456672 BOEICL 2156-7085 (2022). Google Scholar

Biography

Daohuai Jiang received his PhD in microelectronics and solid-state electronics from the University of Chinese Academy of Sciences, Beijing, China, in 2023. During his PhD study, he researched photoacoustic tomography system design and its applications at the HISLab of ShanghaiTech University. He joined the College of Photonic and Electronic Engineering, Fujian Normal University, as an associate professor in August 2023. His research interests include novel photoacoustic imaging system design, biomedical signal processing, and their clinical applications.

Luyao Zhu is currently working toward her master's degree in electronic science and technology at ShanghaiTech University, Shanghai, China. She is mainly focusing on emerging applications of photoacoustic imaging, such as spinal surgery guidance and melanin removal evaluation.

Shangqing Tong graduated from the School of Internet of Things Engineering at Jiangnan University in 2021. He is now a graduate student at ShanghaiTech University. His research centers on solving inverse problems in photoacoustic imaging using deep learning approaches and on computer vision in biomedical applications.

Yuting Shen is now a graduate student in computer science at ShanghaiTech University. Her research focuses on optimizing photoacoustic image reconstruction algorithms for heterogeneous media to enhance image quality. Her research interests include developing photoacoustic non-line-of-sight algorithms and using s-waves for simulation acceleration.

Feng Gao received his bachelor’s degree from Xi'an University of Posts and Telecommunications in 2009 and his master’s degree from Xidian University in 2012. His research interests are image processing and digital circuit design. In addition, he devotes his efforts to developing the underlying technology of photoacoustic imaging and promoting its clinical application.

Fei Gao, PhD (Nanyang Technological University), is an assistant professor in ShanghaiTech University and PI of Hybrid Imaging System Laboratory (HISLab: www.hislab.cn). He is currently serving as associate editor of several journals. He also serves as TPC member of IEEE Ultrasonics Symposium. His interdisciplinary research topics include photoacoustic (PA) imaging physics, biomedical circuits and systems, algorithm and AI, as well as close collaboration with doctors to address unmet clinical needs.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Daohuai Jiang, Luyao Zhu, Shangqing Tong, Yuting Shen, Feng Gao, and Fei Gao "Photoacoustic imaging plus X: a review," Journal of Biomedical Optics 29(S1), S11513 (28 December 2023). https://doi.org/10.1117/1.JBO.29.S1.S11513
Received: 6 September 2023; Accepted: 11 December 2023; Published: 28 December 2023
KEYWORDS: Imaging systems, Ultrasonography, Sensors, Biomedical optics, Photoacoustic imaging, Tissues, Biological imaging
