Unidirectional imagers form images of input objects in only one direction, e.g., from field-of-view (FOV) A to FOV B, while blocking image formation in the reverse direction, from FOV B to FOV A. Here, we report unidirectional imaging under spatially partially coherent light and demonstrate high-quality, power-efficient imaging only in the forward direction (A → B), while image formation in the backward direction (B → A) is distorted and suffers from low power efficiency. Our reciprocal design features a set of spatially engineered linear diffractive layers that are statistically optimized for partially coherent illumination with a given phase correlation length. Our analyses reveal that, when illuminated by a partially coherent beam with a correlation length of ≥∼1.5λ, where λ is the wavelength of light, diffractive unidirectional imagers achieve robust performance, exhibiting the desired asymmetry between the forward and backward imaging directions. A partially coherent unidirectional imager designed for a smaller correlation length of <1.5λ still supports unidirectional image transmission, but with a reduced figure of merit. These partially coherent diffractive unidirectional imagers are compact (axially spanning <75λ), polarization-independent, and compatible with various types of illumination sources, making them well-suited for applications in asymmetric visual information processing and communication.
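For context on the illumination model, below is a minimal numpy sketch (not taken from the paper) of one common way to synthesize spatially partially coherent illumination with a tunable phase correlation length: white-noise phase screens are Gaussian low-pass filtered so that the phase is correlated over roughly the chosen length, and intensities are averaged over many realizations. The grid size, pixel pitch, and correlation length below are illustrative assumptions.

```python
import numpy as np

def partially_coherent_fields(n=256, dx=0.5, wavelength=1.0,
                              corr_len=1.5, n_realizations=64, seed=0):
    """Generate unit-amplitude field realizations whose random phase is correlated
    over ~corr_len (in units of wavelength). All parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=dx * wavelength)            # spatial frequencies
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # Gaussian low-pass filter sets the approximate phase correlation length
    H = np.exp(-((FX**2 + FY**2) * (corr_len * wavelength) ** 2))
    fields = []
    for _ in range(n_realizations):
        white = rng.standard_normal((n, n))
        smooth = np.fft.ifft2(np.fft.fft2(white) * H).real
        smooth = smooth / (smooth.std() + 1e-12)          # normalize phase contrast
        fields.append(np.exp(1j * np.pi * smooth))        # unit amplitude, random phase
    return np.stack(fields)

# Time-averaged intensity of an object under the partially coherent beam:
obj = np.ones((256, 256))                                 # placeholder object transmittance
realizations = partially_coherent_fields()
avg_intensity = np.mean(np.abs(realizations * obj) ** 2, axis=0)
```

Averaging a design's output intensities over many such realizations is one way to statistically evaluate a diffractive imager under partially coherent illumination of a given correlation length.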
The traditional histochemical staining of autopsy tissue samples usually suffers from staining artifacts due to autolysis caused by delayed fixation of cadaver tissues. Here, we introduce an autopsy virtual staining technique to digitally convert autofluorescence images of unlabeled autopsy tissue sections into their hematoxylin and eosin (H&E) stained counterparts through a trained neural network. This technique was demonstrated to effectively mitigate autolysis-induced artifacts inherent in histochemical staining, such as weak nuclear contrast and color fading in the cytoplasmic-extracellular matrix. As a rapid, reagent-efficient, and high-quality histological staining approach, the presented technique holds great potential for widespread application in the future.
We introduce the enhanced Fourier Imager Network (eFIN), an end-to-end deep neural network that synergistically integrates physics-based propagation models with data-driven learning for highly generalizable hologram reconstruction. eFIN overcomes a key limitation of existing methods by performing seamless autofocusing across a large axial range without requiring a priori knowledge of sample-to-sensor distances. Moreover, eFIN incorporates a physics-informed sub-network that accurately infers unknown axial distances through an innovative loss function. eFIN can also achieve a three-fold pixel super-resolution in each lateral direction, increasing the space-bandwidth product by nine-fold and enabling substantial acceleration of image acquisition and processing workflows with a negligible performance penalty.
We present a fast virtual-staining framework for defocused autofluorescence images of unlabeled tissue, matching the performance of standard virtual-staining models that use in-focus label-free images. For this, we introduce a virtual-autofocusing network to digitally refocus the defocused images; these refocused images are then transformed into virtually stained H&E images by a successive neural network. Using coarsely-focused autofluorescence images, with 4-fold fewer focus points and 2-fold lower focusing precision, we achieved virtual-staining performance equivalent to standard H&E virtual-staining networks that utilize finely-focused images, decreasing the total image acquisition time by ~32% and the autofocusing time by ~89% for each whole-slide image.
KEYWORDS: Holography, Physics, Machine learning, Education and training, Deep learning, Data modeling, Biological samples, Biological imaging, Statistical modeling, Imaging systems
We present GedankenNet, a self-supervised learning framework designed to eliminate reliance on experimental training data for holographic image reconstruction and phase retrieval. Analogous to thought (Gedanken) experiments in physics, the training of GedankenNet is guided by the consistency of physical laws governing holography without any experimental data or prior knowledge regarding the samples. When blindly tested on experimental data of various biological samples, GedankenNet performed very well and outperformed existing supervised models on external generalization. We further showed the robustness of GedankenNet to perturbations in the imaging hardware, including unknown changes in the imaging distance, pixel size and illumination wavelength.
We demonstrate a simple yet highly effective uncertainty quantification method for neural networks solving inverse imaging problems. We built forward-backward cycles utilizing the physical forward model and the trained network, derived the relationship of cycle consistency with respect to the robustness, uncertainty and bias of network inference, and obtained uncertainty estimators through regression analysis. An XGBoost classifier based on the uncertainty estimators was trained for out-of-distribution detection using artificial noise-injected images, and it successfully generalized to unseen real-world distribution shifts. Our method was validated on out-of-distribution detection in image deblurring and image super-resolution tasks, outperforming other deep neural network-based models.
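A minimal sketch of the forward-backward cycle idea, assuming a generic trained network and using a simple Gaussian-like blur as a stand-in for the physical forward model; the summary statistics extracted from the cycle residuals are the kind of uncertainty estimators that a downstream classifier such as XGBoost could consume. All names and parameter choices here are illustrative, not the paper's exact recipe.

```python
import numpy as np
import torch

def forward_model(x, kernel):
    """Stand-in physical forward model: a simple blur applied to the image batch."""
    return torch.nn.functional.conv2d(x, kernel, padding=kernel.shape[-1] // 2)

def cycle_consistency_features(y, net, kernel, n_cycles=3):
    """Run forward-backward cycles y -> net(y) -> A(net(y)) -> ... and summarize how
    the cycle residual evolves; the statistics serve as uncertainty estimators."""
    residuals, y_k = [], y
    with torch.no_grad():
        for _ in range(n_cycles):
            x_hat = net(y_k)                       # backward step: network inference
            y_k = forward_model(x_hat, kernel)     # forward step: physical model
            residuals.append(torch.mean((y_k - y) ** 2).item())
    residuals = np.array(residuals)
    slope, intercept = np.polyfit(np.arange(n_cycles), residuals, deg=1)
    return np.array([residuals[0], residuals[-1], slope, intercept])

# Toy usage with an identity "network" just to exercise the cycle; in practice these
# features from many images would train an out-of-distribution classifier, e.g.
# import xgboost as xgb; clf = xgb.XGBClassifier().fit(feature_matrix, ood_labels)
kernel = torch.ones(1, 1, 5, 5) / 25.0
feats = cycle_consistency_features(torch.rand(1, 1, 64, 64), lambda y: y, kernel)
```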
Fluorescence lifetime imaging microscopy (FLIM) measures the fluorescence lifetimes of fluorescent probes to investigate molecular interactions. However, conventional FLIM systems often require extensive scanning that is time-consuming. To address this challenge, we developed a novel computational imaging technique called light field tomographic FLIM (LIFT-FLIM). Our approach acquires volumetric fluorescence lifetime images in a highly data-efficient manner, significantly reducing the number of scanning steps. We demonstrated LIFT-FLIM using a single-photon avalanche diode array on various biological systems. Additionally, we extended the method to spectral FLIM and demonstrated high-content multiplexed imaging of lung organoids. LIFT-FLIM can open new avenues in biomedical research.
We introduce GedankenNet, a self-supervised learning model for hologram reconstruction. During its training, GedankenNet leveraged a physics-consistency loss informed by the physical forward model of the imaging process, and simulated holograms generated from artificial random images with no correspondence to real-world samples. After this experimental-data-free training based on “Gedanken Experiments”, GedankenNet successfully generalized to experimental holograms on its first exposure to real-world experimental data, reconstructing complex fields of various samples. This self-supervised learning framework based on a physics-consistency loss and Gedanken experiments represents a significant step toward developing generalizable, robust and physics-driven AI models in computational microscopy and imaging.
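A minimal PyTorch sketch of an experimental-data-free, physics-consistency training step under stated assumptions: a hologram is simulated from an artificial random image by angular-spectrum propagation, and the loss asks that re-propagating the network's reconstructed field reproduces that hologram. The wavelength, pixel size, propagation distance, and the placeholder network are assumptions, not values or architectures from the paper.

```python
import math
import torch

def angular_spectrum(field, dz, wavelength=0.5e-6, dx=1e-6):
    """Free-space propagation of a complex field by dz (angular spectrum method).
    Wavelength and pixel size are illustrative values."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    kz2 = (1.0 / wavelength) ** 2 - FX**2 - FY**2
    kz = 2 * math.pi * torch.sqrt(torch.clamp(kz2, min=0.0))
    H = torch.exp(1j * kz * dz)
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

def physics_consistency_loss(net, z=1e-3):
    """One 'Gedanken' training step: an artificial random object (no real-world data)
    is numerically propagated to form a hologram; the loss checks that re-propagating
    the network's reconstructed field reproduces that simulated hologram."""
    obj = torch.exp(1j * math.pi * torch.rand(256, 256))        # artificial random image
    hologram = torch.abs(angular_spectrum(obj, z)) ** 2          # simulated intensity
    recon_field = net(hologram)                                  # network's complex-field output
    resimulated = torch.abs(angular_spectrum(recon_field, z)) ** 2
    return torch.mean((resimulated - hologram) ** 2)

# Toy check with a placeholder "network" that lifts the amplitude to a complex field:
loss = physics_consistency_loss(lambda h: torch.sqrt(h).to(torch.complex64))
```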
We demonstrate a deep learning-based framework, called Fourier Imager Network (FIN), which achieves unparalleled generalization in end-to-end phase recovery and hologram reconstruction. We used Fourier transform modules in the FIN architecture, which process the spatial frequencies of the input images with a global receptive field and bring strong regularization and robustness to the hologram reconstruction task. We validated FIN by training it on human lung tissue samples and blindly testing it on human prostate, salivary gland, and Pap smear samples. FIN exhibits superior internal and external generalization compared with existing hologram reconstruction models, also achieving a ~50-fold acceleration in image inference speed.
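A schematic PyTorch sketch of a spectral block in the spirit of the Fourier transform modules described above; the specific layer layout and parameterization are assumptions, but it illustrates how mixing all spatial frequencies with learnable weights yields a global receptive field in a single step.

```python
import torch
import torch.nn as nn

class FourierModule(nn.Module):
    """Assumed spectral block: features are taken to the frequency domain, every spatial
    frequency is scaled by learnable complex weights (global receptive field), then
    transformed back and combined with a local convolutional residual path."""
    def __init__(self, channels, size):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, size, size // 2 + 1, 2) * 0.02)
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):                               # x: (N, C, H, W) with H = W = size
        spec = torch.fft.rfft2(x, norm="ortho")         # global frequency representation
        spec = spec * torch.view_as_complex(self.weight)
        x_global = torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
        return torch.relu(x_global + self.local(x))     # global + local paths

block = FourierModule(channels=16, size=128)
y = block(torch.randn(1, 16, 128, 128))
```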
We report a novel few-shot transfer learning scheme based on a convolutional recurrent neural network architecture, which was used for holographic image reconstruction. Without sacrificing the hologram reconstruction accuracy and quality, this few-shot transfer learning scheme effectively reduced the number of trainable parameters during the transfer learning process by ~90% and improved the convergence speed by 2.5-fold over baseline models. This method can be applied to other deep learning-based computational microscopy and holographic imaging tasks, and facilitates the transfer learning of models to new types of samples with minimal training time and data.
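A minimal PyTorch-style sketch of the parameter-freezing side of such few-shot transfer learning; which layers remain trainable (here selected by name keywords) is an assumption for illustration, not the paper's exact recipe.

```python
import torch.nn as nn

def prepare_few_shot_transfer(model: nn.Module, trainable_keywords=("head", "gru")):
    """Freeze most of a pretrained model and keep only a small, named subset of layers
    trainable; the ~90% figure in the abstract refers to the reduction in trainable
    parameters, while the keyword choice here is purely illustrative."""
    for name, param in model.named_parameters():
        param.requires_grad = any(k in name for k in trainable_keywords)
    n_total = sum(p.numel() for p in model.parameters())
    n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"trainable parameters: {n_train}/{n_total} ({100 * n_train / n_total:.1f}%)")
    return [p for p in model.parameters() if p.requires_grad]

# Usage sketch: pass only the returned parameters to the optimizer, e.g.
# optimizer = torch.optim.Adam(prepare_few_shot_transfer(pretrained_model), lr=1e-4)
```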
We present a virtual staining framework that can rapidly stain defocused autofluorescence images of label-free tissue, matching the performance of standard virtual staining models that use in-focus unlabeled images. We trained and blindly tested this deep learning-based framework using human lung tissue. Using coarsely-focused autofluorescence images acquired with 4× fewer focus points and 2× lower focusing precision, we achieved performance equivalent to the standard virtual staining that used finely-focused autofluorescence input images. We achieved a ~32% decrease in the total image acquisition time needed for virtual staining of a label-free whole-slide image, alongside an ~89% decrease in the autofocusing time.
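A schematic PyTorch sketch of the two-stage inference pipeline described above (virtual autofocusing followed by virtual staining); the placeholder network definitions are assumptions, and only the cascaded chaining of the two models is the point.

```python
import torch
import torch.nn as nn

# Placeholder backbones; the actual architectures are not specified here.
class AutofocusNet(nn.Module):           # defocused autofluorescence -> refocused image
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class StainNet(nn.Module):               # refocused autofluorescence -> virtual H&E (RGB)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        return self.net(x)

autofocus, stain = AutofocusNet().eval(), StainNet().eval()

@torch.no_grad()
def virtual_stain_defocused(defocused_autofluorescence):
    refocused = autofocus(defocused_autofluorescence)    # stage 1: digital refocusing
    return stain(refocused)                              # stage 2: virtual H&E staining

hne = virtual_stain_defocused(torch.randn(1, 1, 256, 256))   # toy coarsely-focused tile
```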
Deep learning-based microscopic imaging methods commonly have limited generalization to new types of samples, requiring diverse training image data. Here we report a few-shot transfer learning framework for hologram reconstruction that can rapidly generalize to new types of samples using only small amounts of training data. The effectiveness of this method was validated on small image datasets of prostate and salivary gland tissue sections unseen by the network before. Compared to baseline models trained from scratch, our approach achieved ~2.5-fold convergence speed acceleration, ~20% training time reduction per epoch, and improved image reconstruction quality.
We report a virtual image refocusing framework for fluorescence microscopy, which extends the imaging depth-of-field by ~20-fold and provides improved lateral resolution. This method utilizes point-spread function (PSF) engineering and a cascaded convolutional neural network model, which we termed W-Net. We tested this W-Net architecture by imaging 50 nm fluorescent nanobeads at various defocus distances using a double-helix PSF, demonstrating a ~20-fold improvement in image depth-of-field over conventional wide-field microscopy. The W-Net architecture can be used to develop deep learning-based image reconstruction and computational microscopy techniques that utilize engineered PSFs and can significantly improve the spatial resolution and throughput of fluorescence microscopy.
Holographic imaging plays an essential role in label-free microscopy techniques, and the retrieval of the phase information of a specimen is vital for image reconstruction in holography. Here, we demonstrate recurrent neural network (RNN) based holographic imaging methods that simultaneously perform autofocusing and holographic image reconstruction from multiple holograms captured at different sample-to-sensor distances. The acquired input holograms are individually back propagated to a common axial plane without any phase retrieval, and then fed into a trained RNN which successfully reveals phase-retrieved and auto-focused images of the unknown samples at its output. As an alternative design, we also employed a dilated convolution in our RNN design to demonstrate an end-to-end phase recovery and autofocusing framework without the need for an initial back-propagation step. The efficacy of these RNN-based hologram reconstruction methods was blindly demonstrated using human lung tissue sections and Papanicolaou (Pap) smears. These methods constitute the first demonstration of the use of RNNs for holographic imaging and phase recovery, and would find applications in label-free microscopy and sensing, among other fields.
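A minimal numpy sketch of the pre-processing step described above: each intensity hologram, recorded at a different sample-to-sensor distance, is individually back-propagated (angular spectrum method, amplitude only, no phase retrieval) to a common axial plane, and the resulting complex fields are stacked as the input sequence for the trained RNN. The wavelength, pixel size, and distances below are illustrative assumptions.

```python
import numpy as np

def back_propagate(hologram, dz, wavelength=0.5e-6, dx=1e-6):
    """Angular-spectrum back-propagation of an intensity hologram by -dz.
    Wavelength and pixel size are illustrative values."""
    n = hologram.shape[-1]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz = 2 * np.pi * np.sqrt(np.maximum((1 / wavelength) ** 2 - FX**2 - FY**2, 0))
    field = np.sqrt(hologram.astype(np.complex128))      # amplitude only, no phase retrieval
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(-1j * kz * dz))

# Holograms captured at different sample-to-sensor distances (values illustrative):
distances = [300e-6, 375e-6, 450e-6]
holograms = [np.random.rand(512, 512) for _ in distances]

# Each hologram is individually back-propagated to the common sample plane; the stacked
# real/imaginary channels form the sequence fed to the trained RNN.
common_plane = np.stack([back_propagate(h, z) for h, z in zip(holograms, distances)])
rnn_input = np.stack([common_plane.real, common_plane.imag], axis=1)   # (T, 2, H, W)
```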
We report a deep learning-based virtual image refocusing method that utilizes double-helix point-spread-function (DH-PSF) engineering and a cascaded neural network model, termed W-Net. This method can virtually refocus a defocused fluorescence image onto an arbitrary axial plane within the sample volume, enhancing the imaging depth-of-field and lateral resolution at the same time. We demonstrated the efficacy of our method by imaging fluorescent nano-beads at various defocus distances, and also quantified the nano-particle localization performance achieved with our virtually-refocused images, demonstrating ~20-fold improvement in image depth-of-field over wide-field microscopy, enabled by the combination of DH-PSF and W-Net inference.
We report a recurrent neural network (RNN)-based cross-modality image inference framework, termed Recurrent-MZ+, that explicitly incorporates two or three 2D fluorescence images, acquired at different axial planes, to rapidly reconstruct fluorescence images at arbitrary axial positions within the sample volume, matching the 3D image of the same sample acquired with a confocal scanning microscope. We demonstrated the efficacy of Recurrent-MZ+ on transgenic C. elegans samples; using 3 wide-field fluorescence images as input, the sample volume reconstructed by Recurrent-MZ+ mitigates the deformations caused by the anisotropic point-spread-function of wide-field microscopy and matches the ground truth confocal image stack of the sample.
We report a deep learning-based volumetric imaging framework that uses sparse 2D scans captured by standard wide-field fluorescence microscopy at arbitrary axial positions within the sample. Through the design of a recurrent neural network, the information from different input planes is blended and virtually propagated in space to rapidly reconstruct the sample volume over an extended axial range. We validated this deep learning-based volumetric imaging framework using C. elegans and nanobead samples, demonstrating a 30-fold reduction in the number of required scans. This versatile and rapid volumetric imaging technique reduces the photon dose on the sample and improves the temporal resolution.
We demonstrate a deep learning-based offline autofocusing method, termed Deep-R, to rapidly and blindly autofocus a single-shot microscopy image captured at an arbitrary out-of-focus plane. Deep-R is experimentally validated using various tissue sections imaged with fluorescence and brightfield microscopes. Furthermore, snapshot autofocusing is demonstrated under different defocusing scenarios, including uniform axial defocusing, sample tilting, and cylindrical and spherical distortions within the field-of-view. Compared with other online autofocusing algorithms, Deep-R is significantly faster while achieving comparable image quality. The Deep-R framework will enable high-throughput microscopic imaging over large fields-of-view, improving the overall imaging throughput and reducing the photon dose on the sample.