KEYWORDS: Biological imaging, Modulation, Design, Super resolution, Neural networks, Optical components, Diffraction limit, Near field optics, Chemical elements, Reflection
Optical superoscillation enables far-field superresolution imaging beyond the diffraction limit. However, existing superoscillatory lenses for spatial superresolution imaging systems still confront critical performance limitations due to the lack of advanced design methods and limited design degrees of freedom. Here, we propose an optical superoscillatory diffractive neural network (SODNN) that achieves spatial superresolution imaging beyond the diffraction limit with superior optical performance. SODNN is constructed with diffractive layers for optical interconnections and imaging samples or biological sensors for nonlinearity; it modulates the incident optical field to create optical superoscillation effects in three-dimensional (3D) space and generate superresolved focal spots. By optimizing diffractive layers with 3D optical-field constraints at an incident wavelength λ, we achieved a superoscillatory optical spot and needle with a full width at half-maximum of 0.407λ at a far-field distance beyond 400λ, without sidelobes across the field of view, and with a long depth of field exceeding 10λ. Furthermore, the SODNN implements a multiwavelength and multifocus spot array that effectively avoids chromatic aberrations, achieving a comprehensive performance improvement that surpasses the trade-offs among performance indicators of conventional superoscillatory lens design methods. Our work will inspire the development of intelligent optical instruments and facilitate applications in imaging, sensing, perception, etc.
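The forward model underlying such a diffractive network can be illustrated with scalar angular-spectrum propagation through a stack of phase-only layers. The sketch below is a minimal, self-contained Python example with placeholder parameters and unoptimized (zero) layer phases; the wavelength, pixel pitch, layer count, and spacings are assumptions for illustration, not the authors' SODNN design.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, distance):
    """Propagate a complex scalar field by `distance` with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    H = np.exp(2j * np.pi * distance * np.sqrt(np.maximum(arg, 0.0)))  # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Placeholder parameters: visible wavelength, half-wavelength pixels, three unoptimized layers.
wavelength, pitch, n_pix = 633e-9, 316e-9, 512
field = np.ones((n_pix, n_pix), dtype=complex)                  # plane-wave illumination
phase_layers = [np.zeros((n_pix, n_pix)) for _ in range(3)]     # optimized in the real SODNN
for phase in phase_layers:
    field = angular_spectrum_propagate(field * np.exp(1j * phase), wavelength, pitch, 10 * wavelength)
intensity = np.abs(angular_spectrum_propagate(field, wavelength, pitch, 400 * wavelength)) ** 2
```

In the actual design, the layer phases would be optimized (e.g. by gradient descent) so that the far-field intensity forms the superoscillatory spot under the stated 3D field constraints.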
Neural networks and other advanced image-processing algorithms excel in a wide variety of computer vision and imaging applications, but their high performance comes at a high computational cost, and their success is sometimes limited. Here, we explore hybrid optical-digital strategies for computational imaging that outsource parts of the algorithm to the optical domain. Using such a co-design of optics and image processing, we can learn application-domain-specific cameras with modern artificial intelligence techniques or compute parts of a convolutional neural network in optics. Optical computing happens at the speed of light and without any memory or power requirements, thereby opening new directions for intelligent imaging systems.
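One concrete way to picture "computing part of a convolutional network in optics" is that a 4f lens system applies a convolution with its point spread function before light reaches the sensor. The snippet below simulates that optical convolution digitally with an FFT; the box-blur PSF is a placeholder standing in for a learned, application-specific kernel, not any particular system's optics.

```python
import numpy as np

def optical_convolution(image, psf):
    """Simulate the convolution a 4f optical system applies: multiply spectra, transform back."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))

image = np.random.rand(256, 256)
psf = np.zeros((256, 256))
psf[126:131, 126:131] = 1.0 / 25.0          # placeholder box blur; a learned PSF would go here
features = optical_convolution(image, psf)  # this step happens "for free" in the optics
```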
Holographic near-eye displays are a promising technology to provide realistic and visually comfortable imagery with improved user experience, but their coherent light sources limit the image quality and restrict the types of patterns that can be generated. A partially-coherent mode, supported by emerging fast spatial light modulators (SLMs), has potential to overcome these limitations. However, these SLMs often have a limited phase control precision, which current computer-generated holography (CGH) techniques are not equipped to handle. In this work, we present a flexible CGH framework for fast, highly-quantized SLMs. This framework is capable of incorporating a wide range of content, including 2D and 2.5D RGBD images, 3D focal stacks, and 4D light fields, and we demonstrate its effectiveness through state-of-the-art simulation and experimental results.
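A common way to handle a coarsely quantized phase SLM in gradient-based CGH is a straight-through estimator: use the quantized phase in the forward pass but let gradients flow through the continuous variable. The PyTorch sketch below shows that idea with a simple single-FFT far-field propagation model and a random placeholder target; it is a schematic of the technique, not the framework presented in the paper.

```python
import math
import torch

def quantize(phase, levels=16):
    """Snap continuous phase to the SLM's discrete levels over [0, 2*pi)."""
    step = 2 * math.pi / levels
    return torch.round(phase / step) * step

target = torch.rand(256, 256)                      # placeholder target amplitude
phase = torch.zeros(256, 256, requires_grad=True)  # hologram phase to optimize
opt = torch.optim.Adam([phase], lr=0.05)
for _ in range(200):
    # Straight-through estimator: quantized values in the forward pass, continuous gradients.
    q = phase + (quantize(phase) - phase).detach()
    recon = torch.fft.fft2(torch.exp(1j * q)).abs()             # single-FFT far-field model
    loss = torch.nn.functional.mse_loss(recon / recon.mean(), target / target.mean())
    opt.zero_grad(); loss.backward(); opt.step()
```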
Holographic near-eye displays have the potential to overcome many long-standing challenges for virtual and augmented reality (VR/AR) systems; they can reproduce full 3D depth cues, improve power efficiency, enable compact display systems, and correct for optical aberrations. Despite these remarkable benefits, this technology has been held back from widespread usage due to the limited image quality achieved by traditional holographic displays, the slow algorithms for computer-generated holography (CGH), and current bulky optical setups. Here, we review recent advances in CGH that utilize artificial intelligence (AI) techniques to solve these challenges.
Holographic displays have recently shown remarkable progress in the research field. However, images reconstructed by existing display systems using phase-only spatial light modulators (SLMs) exhibit noticeable speckle and low contrast due to nontrivial diffraction-efficiency loss. In this work, we investigate a novel holographic display architecture that uses two phase-only SLMs to enable high-quality, contrast-enhanced display experiences. Our system builds on emerging camera-in-the-loop optimization techniques that capture both diffracted and undiffracted light on the image plane with a camera and use this feedback to update the hologram patterns on the SLMs in an iterative fashion. Our experimental results demonstrate that the proposed display architecture can deliver higher-contrast holographic images with little speckle, without the need for extra optical filtering.
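Camera-in-the-loop optimization treats the physical capture as the error signal for iteratively updating the SLM phase patterns. The sketch below shows only the loop structure, with a simulated stand-in for the camera and a single phase pattern; the real system captures actual frames (and needs a differentiable proxy model to obtain gradients through the hardware) and jointly updates two SLMs.

```python
import torch

def capture(phase):
    """Stand-in for the physical capture of diffracted plus undiffracted light on the
    image plane; the real system grabs a camera frame here."""
    return torch.fft.fft2(torch.exp(1j * phase)).abs() ** 2

target = torch.rand(256, 256)                       # placeholder target intensity
phase = torch.zeros(256, 256, requires_grad=True)   # one SLM pattern; the paper uses two
opt = torch.optim.Adam([phase], lr=0.05)
for _ in range(100):
    img = capture(phase)                            # camera-in-the-loop feedback
    loss = torch.nn.functional.mse_loss(img / img.mean(), target / target.mean())
    opt.zero_grad(); loss.backward(); opt.step()
```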
Single-photon detectors time-stamp incident photon events with picosecond accuracy. When combined with pulsed light sources, these emerging detectors record transient measurements of a scene that contain the time-of-flight information of the direct light reflecting off visible objects, as well as the indirectly scattered light from objects outside the line of sight. The latter information has recently been demonstrated to enable non-line-of-sight (NLOS) imaging, where advanced inverse methods process the time-resolved indirect light transport of a scene to estimate the 3D shape of objects hidden around corners. In this article, we review computationally efficient NLOS approaches that build on confocally scanned data, where the light pulses used to probe a scene are optically aligned with the detection path. This specific scanning procedure has given rise to computationally efficient inverse methods that enable real-time NLOS image reconstruction.
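For reference, the confocal image-formation model that these methods invert can be written (in our notation, under the usual assumptions of a single diffuse bounce and no occlusions within the hidden scene) as

\[
\tau(x', y', t) \;=\; \iiint_{\Omega} \frac{\rho(x, y, z)}{r^{4}}\, \delta\bigl(2r - tc\bigr)\, dx\, dy\, dz,
\qquad r = \sqrt{(x - x')^{2} + (y - y')^{2} + z^{2}},
\]

where \(\tau(x', y', t)\) is the transient recorded at confocal scan position \((x', y')\), \(\rho\) is the albedo of the hidden scene, \(c\) is the speed of light, and the delta function restricts contributions to hidden points at path length \(tc/2\). A change of variables in \(z\) and \(t\) turns this into the light-cone transform, which underlies several of the efficient closed-form inversions in this line of work.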
Purpose: Repetitive transcranial magnetic stimulation (rTMS) is an important treatment option for medication-resistant depression. It uses an electromagnetic coil that must be positioned accurately at a specific location and angle next to the head so that specific brain areas are stimulated. Existing image-guided neuronavigation systems allow accurate targeting but add cost, training, and setup time, preventing their widespread use in the clinic. Mixed-reality neuronavigation can help mitigate these issues and thereby enable more widespread use of image-based neuronavigation by providing a much more intuitive and streamlined visualization of the target. A mixed-reality neuronavigation system requires two core functionalities: 1) tracking of the patient's head and 2) visualization of targeting-related information. Here we focus on the head-tracking functionality and compare three head-tracking methods for a mixed-reality neuronavigation system. Methods: We integrated three head-tracking methods into the mixed-reality neuronavigation framework and measured their accuracy. Specifically, we experimented with (a) marker-based tracking with a mixed-reality headset (optical see-through head-mounted display, OST-HMD) camera, (b) marker-based tracking with a world-anchored camera, and (c) markerless RGB-depth (RGB-D) tracking with a world-anchored camera. To measure the accuracy of each approach, we measured the distance between real-world and virtual target points on a mannequin head. Results: The mean tracking error for the initial head pose and for the head rotated by 10° and 30° was, for the three methods respectively: (a) 3.54±1.10 mm, 3.79±1.78 mm and 4.08±1.88 mm; (b) 3.97±1.41 mm, 6.01±2.51 mm and 6.84±3.48 mm; (c) 3.16±2.26 mm, 4.46±2.30 mm and 5.83±3.70 mm. Conclusion: For the initial head pose, all three methods achieved the required accuracy of < 5 mm for TMS treatment. For smaller head rotations of 10°, only the marker-based method (a) and the markerless method (c) delivered sufficient accuracy for TMS treatment. For larger head rotations of 30°, only the marker-based method (a) achieved sufficient accuracy. While the markerless method (c) did not provide sufficient accuracy for TMS at the larger head rotations, it offers significant advantages such as occlusion handling and stability, and could potentially meet the accuracy requirements with further methodological refinements.
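The accuracy metric used above reduces to Euclidean distances between corresponding real-world and rendered target points, summarized as mean ± standard deviation per head pose. A minimal sketch with made-up coordinates standing in for the mannequin measurements:

```python
import numpy as np

def tracking_error(real_pts, virtual_pts):
    """Per-target Euclidean distance (mm) between physical and rendered target points."""
    d = np.linalg.norm(real_pts - virtual_pts, axis=1)
    return d.mean(), d.std()

# Hypothetical coordinates (mm) for one head pose; the study used points on a mannequin head.
real = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0], [20.0, -3.0, 1.0], [15.0, 8.0, -4.0]])
virtual = real + np.random.normal(scale=2.0, size=real.shape)
mean_err, std_err = tracking_error(real, virtual)
print(f"{mean_err:.2f} ± {std_err:.2f} mm")
```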
This paper discusses a method to reconstruct a transparent flow surface from a single camera shot with the aid of a microlens array. An intentionally prepared high-frequency background placed behind the refractive flow is captured, and a curl-free optical flow algorithm is applied between pairs of images taken by different microlenses. The computed raw optical flow vector is a blend of motion parallax and the background deformation caused by the underlying flow. Subtracting the motion parallax, which is obtained by calibration, from the total optical flow vector yields the background deformation vector. The deflection vectors in each image are then used to reconstruct the flow profile. A synthetic data set of fuel injection was used to evaluate the accuracy of the proposed algorithm, and good agreement was achieved between the test and reconstructed data. Finally, real light field data of hot air rising from a lighter flame is used to reconstruct a hot-air plume surface.
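The central relation here is that the per-view deflection equals the measured optical flow minus the calibrated motion parallax; because the deflection field is curl-free, it can be integrated into a scalar surface, for example with an FFT-based Poisson solve. The sketch below illustrates both steps generically; the function names, the Poisson integration, and the assumption that the integrated scalar field stands in for the reconstructed surface are ours, not necessarily the paper's exact procedure.

```python
import numpy as np

def background_deformation(total_flow, motion_parallax):
    """Deflection due to the refractive flow: measured optical flow minus calibrated parallax."""
    return total_flow - motion_parallax

def integrate_curl_free(defl_x, defl_y, pixel_pitch=1.0):
    """Poisson-integrate a curl-free deflection field into a scalar surface via the FFT."""
    n, m = defl_x.shape
    fx = np.fft.fftfreq(m, d=pixel_pitch)
    fy = np.fft.fftfreq(n, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    div = (np.gradient(defl_x, pixel_pitch, axis=1) +
           np.gradient(defl_y, pixel_pitch, axis=0))        # divergence of the deflection field
    denom = np.where((FX == 0) & (FY == 0), 1.0,
                     -(2 * np.pi) ** 2 * (FX ** 2 + FY ** 2))
    S_hat = np.fft.fft2(div) / denom
    S_hat[0, 0] = 0.0                                        # the surface mean is unconstrained
    return np.real(np.fft.ifft2(S_hat))

# Hypothetical per-view flow fields; in the paper these come from the microlens image pairs.
total = np.random.rand(2, 128, 128)
parallax = np.random.rand(2, 128, 128)
dx, dy = background_deformation(total, parallax)
surface = integrate_curl_free(dx, dy)
```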
KEYWORDS: LCDs, 3D displays, Polarization, 3D volumetric displays, Spatial resolution, Prototyping, Opacity, Integral imaging, Signal attenuation, Convolution
This paper focuses on resolving long-standing limitations of parallax barriers by applying formal optimization methods. We consider two generalizations of conventional parallax barriers. First, we consider general two-layer architectures, supporting high-speed temporal variation with arbitrary opacities on each layer. Second, we consider general multi-layer architectures containing three or more light-attenuating layers. This line of research has led to two new attenuation-based displays. The High-Rank 3D (HR3D) display contains a stacked pair of LCD panels; rather than using heuristically defined parallax barriers, both layers are jointly optimized using low-rank light field factorization, resulting in increased brightness, refresh rate, and battery life for mobile applications. The Layered 3D display extends this approach to multi-layered displays composed of compact volumes of light-attenuating material. Such volumetric attenuators recreate a 4D light field when illuminated by a uniform backlight. We further introduce Polarization Fields as an optically and computationally efficient extension of Layered 3D to multi-layer LCDs. Together, these projects reveal new generalizations of parallax barrier concepts, enabled by the application of formal optimization methods to multi-layer attenuation-based designs in a manner that uniquely leverages the compressive nature of 3D scenes for display applications, as sketched below.
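At the heart of the HR3D optimization is a rank-constrained, non-negative factorization of the light field matrix, whose factors become the time-multiplexed frames shown on the front and rear LCDs. The sketch below uses plain Lee-Seung multiplicative updates on a random placeholder matrix; the actual system factorizes weighted 4D light fields and accounts for display-specific constraints, so treat this only as an illustration of the factorization idea.

```python
import numpy as np

def factorize_light_field(L, rank, iters=200, eps=1e-8):
    """Non-negative low-rank factorization L ~ F @ G: rows of F drive front-layer frames,
    columns of G drive rear-layer frames, shown time-sequentially on the two LCDs."""
    n, m = L.shape
    rng = np.random.default_rng(0)
    F = rng.random((n, rank))
    G = rng.random((rank, m))
    for _ in range(iters):
        # Lee-Seung multiplicative updates minimize the Euclidean reconstruction error.
        F *= (L @ G.T) / (F @ G @ G.T + eps)
        G *= (F.T @ L) / (F.T @ F @ G + eps)
    return F, G

# Hypothetical 2D light field matrix (views flattened along each axis); real data is 4D.
L = np.random.rand(64, 64)
F, G = factorize_light_field(L, rank=4)
```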
Holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Until now, however, these tools have been applied separately. Our intention is to combine both technologies to create a powerful tool for science, industry, and education. We are currently investigating the possibility of integrating computer-generated graphics and holograms. This paper gives an overview of our latest results. It presents several applications of interaction techniques to graphically enhanced holograms and offers a first look at a novel method that reconstructs depth from optical holograms.