Deep learning has transformed computational imaging, but traditional pixel-based representations limit the ability of networks to capture continuous, multiscale object features. To address this gap, we introduce a local conditional neural field (LCNF) framework that leverages a continuous neural representation to provide flexible object representations. LCNF’s unique capabilities are demonstrated on the highly ill-posed phase retrieval problem of multiplexed Fourier ptychographic microscopy. Our network, termed neural phase retrieval (NeuPh), enables continuous-domain, resolution-enhanced phase reconstruction, offering scalability, robustness, accuracy, and generalizability that outperform existing methods. NeuPh integrates a local conditional neural representation with a coordinate-based training strategy. We show that NeuPh accurately reconstructs high-resolution phase images from low-resolution intensity measurements. Furthermore, NeuPh consistently applies continuous object priors and effectively eliminates various phase artifacts, demonstrating robustness even when trained on imperfect datasets. Moreover, NeuPh improves accuracy and generalization compared with existing deep learning models. We further investigate a hybrid training strategy that combines experimental and simulated datasets, elucidating the impact of the domain shift between experiment and simulation. Our work underscores the potential of the LCNF framework for solving complex, large-scale inverse problems, opening up new possibilities for deep-learning-based imaging techniques.
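The core idea of a local conditional neural field, a coordinate-based decoder conditioned on locally interpolated latent features, can be sketched as follows. This is an illustrative NumPy toy, not the NeuPh implementation: the latent grid stands in for a CNN encoder's output, and all sizes, function names, and random weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_latent(latent_grid, x, y):
    """Bilinearly interpolate a local latent vector at continuous coords (x, y) in [0, 1]."""
    H, W, C = latent_grid.shape
    gx, gy = x * (W - 1), y * (H - 1)
    x0, y0 = int(np.floor(gx)), int(np.floor(gy))
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = gx - x0, gy - y0
    return ((1 - wy) * ((1 - wx) * latent_grid[y0, x0] + wx * latent_grid[y0, x1])
            + wy * ((1 - wx) * latent_grid[y1, x0] + wx * latent_grid[y1, x1]))

def mlp_decode(z, coord, weights):
    """Tiny MLP decoder: maps (local latent, query coordinate) -> phase value."""
    h = np.concatenate([z, coord])
    for W_, b in weights[:-1]:
        h = np.maximum(W_ @ h + b, 0.0)   # ReLU hidden layers
    W_, b = weights[-1]
    return (W_ @ h + b)[0]

# Hypothetical sizes: an 8x8 grid of 16-dim latents from an encoder (not shown).
latent_grid = rng.normal(size=(8, 8, 16))
dims = [18, 32, 32, 1]                    # 16 latent dims + 2 coordinate dims in
weights = [(rng.normal(scale=0.1, size=(dims[i + 1], dims[i])),
            np.zeros(dims[i + 1])) for i in range(len(dims) - 1)]

# The phase can be queried at any continuous coordinate, independent of pixel grid.
phase = mlp_decode(bilinear_latent(latent_grid, 0.37, 0.81),
                   np.array([0.37, 0.81]), weights)
```

Because the decoder takes continuous coordinates, the reconstruction is not tied to the measurement's pixel grid, which is what enables the continuous-domain, resolution-enhanced output described above.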
Traditional fluorescence microscopy with conventional optics suffers from a trade-off among resolution, field of view (FOV), and miniaturization. Computational imaging techniques overcome these limitations by leveraging miniature optics and enabling strong multiplexing. However, the shift-variant degradation introduced by miniaturized lenses poses computational and memory challenges. In this work, we develop a Multi-channel FourierNet that learns the global shift-variant filters in the frequency domain without any prior knowledge, providing consistent performance across a large FOV. Additionally, we validate the effectiveness of our network by visualizing the correspondence between the saliency maps and the truncated PSFs from different viewpoints. We demonstrate that the network, trained on simulated data, can perform real-time reconstruction of biological samples. We believe this approach holds great promise for advancing computational imaging techniques across diverse applications.
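The kind of shift-variant model that a frequency-domain multi-channel network might learn can be sketched as a sum of spatially weighted, Fourier-filtered channels. This is a hedged NumPy illustration of that general structure; the channel count, weight maps, and filters are assumptions, not the paper's learned parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def shift_variant_forward(x, weight_maps, freq_filters):
    """Multi-channel shift-variant model: each channel applies a spatial
    weight map and a frequency-domain filter; channel outputs are summed."""
    Y = np.zeros(x.shape, dtype=complex)
    for w, H in zip(weight_maps, freq_filters):
        Y += H * np.fft.fft2(w * x)
    return np.fft.ifft2(Y).real

N, K = 64, 3                                       # image size, channels (assumed)
x = rng.random((N, N))
weight_maps = [rng.random((N, N)) for _ in range(K)]
freq_filters = [np.fft.fft2(rng.random((N, N)) / N**2) for _ in range(K)]

y = shift_variant_forward(x, weight_maps, freq_filters)
```

A shift-invariant system is the one-channel special case (uniform weight map, single filter), which makes this decomposition a natural way to extend frequency-domain learning to field-varying optics.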
HiLo microscopy is a widefield optical sectioning technique based on computational reconstruction from two images, one acquired under structured illumination and the other under uniform illumination. A variety of patterns, including speckle and periodic grids, can be employed to achieve structured illumination. In this study, we introduce a novel HiLo strategy that uses an off-the-shelf holographic diffuser and a low-coherence LED source to generate random caustic patterns. This method offers several benefits over existing ones, such as simplicity and cost-effectiveness. We achieve 4.5-µm optical sectioning with a 20×, 0.75-NA objective and demonstrate the performance of our method by imaging a 400-µm-thick, highly scattering brain section. We anticipate that our caustic-based structured illumination approach will augment the versatility of HiLo microscopy and extend to various imaging applications.
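The HiLo fusion step itself is well documented in the literature: high-frequency (Hi) content is taken from a high-pass of the uniform image, which is inherently sectioned, while low-frequency (Lo) sectioned content is recovered by weighting the uniform image with the local contrast of the structured image. A simplified NumPy sketch follows; the filter width `sigma`, the contrast estimator, and the fusion weight `eta` are illustrative choices, not the authors' exact parameters.

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Gaussian low-pass filter applied in the Fourier domain (sigma in pixels)."""
    H, W = img.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    G = np.exp(-2 * (np.pi * sigma) ** 2 * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(img) * G).real

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """Simplified HiLo fusion: Lo from contrast-weighted uniform image,
    Hi from the complementary high-pass of the uniform image."""
    diff = structured - uniform                        # isolates the illumination pattern
    local_mean = gaussian_lowpass(uniform, sigma) + 1e-9
    contrast = np.sqrt(np.maximum(gaussian_lowpass(diff**2, sigma), 0)) / local_mean
    lo = gaussian_lowpass(contrast * uniform, sigma)   # sectioned low frequencies
    hi = uniform - gaussian_lowpass(uniform, sigma)    # sectioned high frequencies
    return eta * lo + hi

# Toy usage with a synthetic sinusoidal modulation standing in for caustics.
rng = np.random.default_rng(3)
uniform = rng.random((64, 64))
structured = uniform * (1 + 0.5 * np.sin(np.arange(64) / 2.0))
out = hilo(uniform, structured)
```

In practice `eta` is chosen so that the Lo and Hi bands match seamlessly at the crossover frequency; the random caustic pattern described above plays the same role as speckle in providing the structured image.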
KEYWORDS: 3D modeling, Data modeling, Stereoscopy, Point spread functions, Resolution enhancement technologies, Particles, Optical design, Microlens, Luminescence, Imaging systems
The Computational Miniature Mesoscope (CM2) is a novel fluorescence imaging device that achieves single-shot 3D imaging on a compact platform by jointly designing the optics and the algorithm. However, its low axial resolution and heavy computational cost hinder biomedical applications. Here, we demonstrate a deep learning framework, termed CM2Net, that performs fast and reliable 3D reconstruction. Specifically, the multi-stage CM2Net is trained on synthetic data with realistic field-varying aberrations based on a 3D linear shift-variant model. We experimentally demonstrate that CM2Net provides a 10x improvement in axial resolution and 1400x faster reconstruction speed.
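A 3D linear shift-variant forward model of the general kind described above can be sketched as depth-wise convolutions with depth-dependent PSFs, modulated by field-varying weight maps and summed onto a 2D sensor. This NumPy toy uses circular convolutions and assumed sizes; it is not the CM2 calibration model.

```python
import numpy as np

rng = np.random.default_rng(2)

def lsv_forward(volume, psfs, weight_maps):
    """3D linear shift-variant forward model (sketch): each depth slice is
    convolved with its depth-dependent PSF, modulated by a field-varying
    weight map, and all contributions sum onto the 2D measurement."""
    y = np.zeros(volume.shape[1:])
    for o_z, h_z, w_z in zip(volume, psfs, weight_maps):
        conv = np.fft.ifft2(np.fft.fft2(o_z) * np.fft.fft2(h_z)).real
        y += w_z * conv
    return y

D, N = 4, 32                                   # depth planes, lateral size (assumed)
volume = rng.random((D, N, N))                 # 3D fluorescence distribution
psfs = rng.random((D, N, N))
psfs /= psfs.sum(axis=(1, 2), keepdims=True)   # energy-normalized PSFs
weight_maps = np.ones((D, N, N))               # shift-invariant limit for simplicity

y = lsv_forward(volume, psfs, weight_maps)
```

Simulating training pairs through such a forward model is what lets a network like CM2Net learn field-varying aberrations from synthetic data alone.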