Significance: Fluorescence lifetime imaging microscopy (FLIM) of the metabolic co-enzyme nicotinamide adenine dinucleotide (phosphate) [NAD(P)H] is a popular method to monitor single-cell metabolism within unperturbed, living 3D systems. However, FLIM of NAD(P)H has not been performed in a light-sheet geometry, which is advantageous for rapid imaging of cells within live 3D samples.
Aim: We aim to design, validate, and demonstrate a proof-of-concept light-sheet system for NAD(P)H FLIM.
Approach: A single-photon avalanche diode camera was integrated into a light-sheet microscope to achieve optical sectioning and limit out-of-focus contributions for NAD(P)H FLIM of single cells.
Results: An NAD(P)H light-sheet FLIM system was built and validated with fluorescence lifetime standards and with time-course imaging of metabolic perturbations in pancreas cancer cells with 10 s integration times. NAD(P)H light-sheet FLIM in vivo was demonstrated with live neutrophil imaging in a larval zebrafish tail wound, also with 10 s integration times. Finally, the theoretical and practical imaging speeds for NAD(P)H FLIM were compared across laser-scanning and light-sheet geometries, indicating a 30× to 6× acquisition speed advantage for the light sheet over the laser-scanning geometry.
Conclusions: FLIM of NAD(P)H is feasible in a light-sheet geometry and is attractive for 3D live-cell imaging applications, such as monitoring immune cell metabolism and migration within an organism.
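To make the scanning-versus-light-sheet comparison concrete, the sketch below estimates frame times for the two geometries. All parameter values (image size, photons per pixel, count rates) are illustrative assumptions, not the numbers used in the paper:

```python
# Back-of-the-envelope FLIM acquisition-time comparison (illustrative only).
pixels = 256 * 256          # image size (assumed)
photons_per_pixel = 1000    # photons needed for a reliable lifetime fit (assumed)

# Laser scanning visits one pixel at a time; the detected count rate is
# capped by pile-up and by the dim NAD(P)H autofluorescence.
scan_rate = 5e5             # detected photons/s at the scanned spot (assumed)
t_scan = pixels * photons_per_pixel / scan_rate   # ~131 s per frame

# A light-sheet SPAD camera integrates all pixels of a plane in parallel,
# but each camera pixel sees only its own, much weaker, flux.
pixel_rate = 100            # detected photons/s per camera pixel (assumed)
t_sheet = photons_per_pixel / pixel_rate          # ~10 s per frame

print(f"scanning: {t_scan:.0f} s, light sheet: {t_sheet:.0f} s, "
      f"speed-up: {t_scan / t_sheet:.0f}x")
```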
Single-photon counting avalanche diodes (SPADs) are versatile sensors for active and time-correlated measurements such as ranging and fluorescence imaging. These detectors also have great potential for passive or uncorrelated imaging. Recently, it was demonstrated that passive imaging of photon flux is possible by determining the mean photon arrival time: under ambient illumination, timestamp data can be interpreted as a metric for the photon impingement rate. Various applications have been investigated, including high-dynamic-range imaging, single-photon imaging, and capture of fast-moving objects or dynamic scenes. However, noise and motion blur require sophisticated signal processing to enable sub-pixel resolution imaging and reconstruction of the scene by motion compensation. In this paper, we present new results on the evaluation of global scene motion. In our approach, motion is intentionally generated by a rotating wedge prism, resulting in continuous global motion on a circular path. We have studied scenes with different optical contrast.
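As a toy illustration of flux estimation from timestamps, the sketch below assumes an idealized pixel (no dead time, no background, no truncation): after each reset, the first photon arrival time is exponentially distributed with rate equal to the detected flux, so the flux can be estimated as the reciprocal of the mean timestamp.

```python
import numpy as np

# Minimal sketch: photon flux from single-photon arrival timestamps.
rng = np.random.default_rng(0)

true_flux = 5e4                                      # detected photons/s (assumed)
timestamps = rng.exponential(1.0 / true_flux, 2000)  # first-arrival times (s)

flux_estimate = 1.0 / timestamps.mean()              # flux ~ 1 / mean arrival time
print(f"true {true_flux:.3g} /s, estimated {flux_estimate:.3g} /s")
```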
SPAD array sensors support higher-throughput fluorescence lifetime imaging microscopy (FLIM) by transitioning from laser-scanning to wide-field geometries. While a SPAD camera in epi-fluorescence geometry enables wide-field FLIM of fluorescently labeled samples, label-free imaging of single-cell autofluorescence is not feasible in an epi-fluorescence geometry because background fluorescence from out-of-focus culture medium masks cell autofluorescence and biases lifetime measurements. Here, we address this problem in a proof-of-concept implementation by integrating the SPAD camera in a light-sheet illumination geometry to achieve optical sectioning and limit out-of-focus contributions, enabling label-free wide-field FLIM of single-cell NAD(P)H autofluorescence.
KEYWORDS: Super resolution, Denoising, Sensors, Cameras, Reconstruction algorithms, Signal to noise ratio, Fluctuations and noise, Photodetectors, Video, Single photon
Single-photon sensitive image sensors have recently gained popularity in passive imaging applications, where the goal is to capture photon flux (brightness) values of different scene points in the presence of challenging lighting conditions and scene motion. Recent work has shown that high-speed bursts of single-photon timestamp information captured using a single-photon avalanche diode camera can be used to estimate and correct for scene motion, thereby improving signal-to-noise ratio and reducing motion blur artifacts. We compare various design choices in the processing pipeline used for noise reduction, motion compensation, and upsampling of single-photon timestamp frames. We consider various pixelwise noise reduction techniques in combination with state-of-the-art deep neural network upscaling algorithms to super-resolve intensity images formed from single-photon timestamp data. We explore the trade space of motion blur and signal noise in various scenes with different motion content. Using real data captured with a hardware prototype, we achieved super-resolution reconstruction at frame rates up to 65.8 kHz (the native sampling rate of the sensor) and captured videos of fast-moving objects. The best reconstruction is obtained with the motion compensation approach, which achieves a structural similarity (SSIM) of about 0.67 for fast-moving rigid objects and resolves sub-pixel detail. These results show the relative superiority of our motion compensation compared to other approaches, which do not exceed an SSIM of 0.5.
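A minimal sketch of the motion-compensation step, assuming rigid global motion between burst frames and using off-the-shelf phase correlation; the paper's full pipeline (timestamp-specific denoising, learned upsampling) is not reproduced here:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def motion_compensated_average(frames, upsample=10):
    """Align a burst of noisy intensity frames to the first frame and
    average them (motion-compensated noise reduction, rigid motion only)."""
    reference = frames[0]
    accum = reference.astype(float).copy()
    for frame in frames[1:]:
        # Subpixel global shift estimate via phase correlation.
        offset, _, _ = phase_cross_correlation(
            reference, frame, upsample_factor=upsample)
        accum += nd_shift(frame.astype(float), offset)
    return accum / len(frames)
```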
KEYWORDS: Single photon, Sensors, Super resolution, Nonlinear filtering, Cameras, Digital filtering, Image quality, Linear filtering, Fluctuations and noise, Electronic filtering
A single-photon counting avalanche diode (SPAD) can measure the photon flux from uncorrelated single photons. In the present work, we show how the sensor's photon count rate is related to the intensity, or radiant flux, that is reflected from surfaces in the sensor's field of view and incident on the sensor array. After a brief theoretical discussion of photon flux imaging, we examine various denoising strategies and the effect of motion blur. Finally, we present the application of a fast super-resolution convolutional neural network (FSRCNN) to upscale images by a factor of 3× to obtain super-resolution images (32 × 32 → 96 × 96).
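For reference, a minimal PyTorch sketch of the FSRCNN architecture (Dong et al., 2016) configured for 3× upscaling of single-channel 32 × 32 flux images; the training procedure and data handling from this work are not reproduced:

```python
import torch
import torch.nn as nn

class FSRCNN(nn.Module):
    """FSRCNN sketch; hyperparameters d, s, m follow the original paper."""
    def __init__(self, scale=3, d=56, s=12, m=4):
        super().__init__()
        layers = [nn.Conv2d(1, d, 5, padding=2), nn.PReLU(d),   # feature extraction
                  nn.Conv2d(d, s, 1), nn.PReLU(s)]              # shrinking
        for _ in range(m):                                      # nonlinear mapping
            layers += [nn.Conv2d(s, s, 3, padding=1), nn.PReLU(s)]
        layers += [nn.Conv2d(s, d, 1), nn.PReLU(d)]             # expanding
        self.body = nn.Sequential(*layers)
        self.upscale = nn.ConvTranspose2d(                      # learned upsampling
            d, 1, 9, stride=scale, padding=4, output_padding=scale - 1)

    def forward(self, x):
        return self.upscale(self.body(x))

x = torch.randn(1, 1, 32, 32)     # a 32x32 flux image
print(FSRCNN()(x).shape)          # -> torch.Size([1, 1, 96, 96])
```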
Non-Line-of-Sight (NLOS) imaging uses fast illumination and detection to reconstruct images of scenes from indirect illumination. Light reflected off a relay surface is thereby used to view the scene. One approach to compute these reconstructions from the captured data is to transform them into line-of-sight wave propagation problems, creating a virtual wavefront to model a virtual camera at the location of the relay surface. Our NLOS imaging system samples the virtual wavefront at the virtual aperture. As in a line-of-sight camera, scene reconstruction for the virtual camera is achieved by propagating the virtual wave back into the scene. In line-of-sight cameras, this operation is often performed by a lens; for the virtual camera, we implement it computationally. This approach allows us to transfer methods for scene reconstruction, scene inference, and imaging from existing line-of-sight imaging approaches to NLOS imaging.
In particular, we make use of fast wave propagation algorithms to create high-speed, memory-efficient NLOS imaging. This allows us to reconstruct complex scenes in sub-second times for variable hardware configurations. Notably, our reconstruction methods allow us to use SPAD arrays in conjunction with laser scanning to improve capture speed.
There is a large diversity of line-of-sight imaging approaches with different properties that probe different aspects of the scene. In principle, the Phasor Field Virtual Wave formalism allows us to turn any of them into an NLOS virtual camera. I will cover several examples of this process that yield different NLOS reconstructions, including 2D NLOS imaging, transient NLOS videos, and visualization of higher-order light paths from 4th and 5th bounces in the hidden scene.
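As a toy illustration of the phasor-field idea, the sketch below back-propagates a single virtual frequency from the relay wall into the hidden volume with a direct Rayleigh-Sommerfeld sum. The confocal simplification (illumination path already compensated) and all names are assumptions; the actual systems use broadband virtual pulses and fast wave-propagation algorithms rather than this O(n_wall * n_vox) sum:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def phasor_field_image(H, wall_pts, voxels, dt, f_virtual=0.5e9):
    """Monochromatic phasor-field reconstruction sketch.

    H        : (n_wall, n_bins) time-resolved response at each sampled
               relay-wall point (illumination path assumed compensated).
    wall_pts : (n_wall, 3) virtual-aperture sample positions.
    voxels   : (n_vox, 3) reconstruction points in the hidden scene.
    """
    t = np.arange(H.shape[1]) * dt
    # Project the measured transients onto one virtual-light frequency.
    phasors = H @ np.exp(-2j * np.pi * f_virtual * t)           # (n_wall,)
    k = 2 * np.pi * f_virtual / C
    # Rayleigh-Sommerfeld back-propagation from the wall to each voxel.
    r = np.linalg.norm(voxels[:, None, :] - wall_pts[None, :, :], axis=2)
    return np.abs((np.exp(1j * k * r) / r) @ phasors)           # (n_vox,)
```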
Single-photon sensor technology is rapidly emerging as the optical sensor technology of choice in specialized low flux imaging applications such as long-range LiDAR, fluorescence microscopy, and non-line-of-sight imaging. We ask the question: Can single-photon sensors be used more broadly as general-purpose image sensors for passive 2D intensity imaging? We derive a photon flux estimator using the number of photons detected in a fixed exposure time by a dead-time-limited single-photon avalanche diode (SPAD) sensor. Unlike a conventional image sensor pixel that has a hard saturation limit due to its full well capacity, our SPAD-based passive imaging method has a non-linear response that never saturates. This enables SPADs to operate not only at extremely low photon flux levels but also at extremely high flux levels, several orders of magnitude higher than the saturation limit of conventional image sensors. We present a comprehensive theoretical analysis of the effect of various design parameters on the signal-to-noise ratio and dynamic range of a passively operated SPAD pixel, and also demonstrate the dynamic range improvement experimentally.
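A sketch of the idea under simple assumptions (unit quantum efficiency, non-paralyzable dead time): a free-running SPAD is blind for a fixed dead time after each detection, and correcting the exposure by the total blind time yields a flux estimate that keeps working far beyond where a conventional pixel would saturate. Parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def detections_with_dead_time(flux, T, dead_time):
    """Count detections of a free-running SPAD over exposure T; the pixel
    is blind for `dead_time` seconds after each detection."""
    t, n = 0.0, 0
    while True:
        t += rng.exponential(1.0 / flux)   # next photon arrival
        if t > T:
            return n
        n += 1
        t += dead_time                     # pixel blind after a detection

flux, T, td = 5e7, 1e-2, 150e-9            # illustrative values
N = detections_with_dead_time(flux, T, td)
# Dead-time-corrected estimate: the pixel is 'live' only T - N*td seconds.
flux_hat = N / (T - N * td)
print(f"true {flux:.3g} /s, estimated {flux_hat:.3g} /s")
```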
The excited state lifetime of a fluorophore together with its fluorescence emission spectrum provide information that can yield valuable insights into the nature of a fluorophore and its microenvironment. However, it is difficult to obtain both channels of information in a conventional scheme as detectors are typically configured either for spectral or lifetime detection. We present a fiber-based method to obtain spectral information from a multiphoton fluorescence lifetime imaging (FLIM) system. This is made possible using the time delay introduced in the fluorescence emission path by a dispersive optical fiber coupled to a detector operating in time-correlated single-photon counting mode. This add-on spectral implementation requires only a few simple modifications to any existing FLIM system and is considerably more cost-efficient compared to currently available spectral detectors.
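The wavelength-to-time mapping used here rests on the fiber's group-velocity dispersion: two emission bands separated by Δλ arrive separated by Δt = D·L·Δλ, where D is the dispersion parameter and L the fiber length. A quick order-of-magnitude check with assumed values (not the paper's):

```python
# Arrival-time separation from fiber dispersion (illustrative values only).
D = 300.0             # dispersion parameter, ps/(nm*km) (assumed magnitude)
L = 0.5               # fiber length, km (assumed)
dlam = 50.0           # separation of two emission bands, nm (assumed)
dt_ps = D * L * dlam  # = 7500 ps
print(f"arrival-time separation: {dt_ps / 1000:.1f} ns")  # 7.5 ns, easily
                                                          # resolved by TCSPC
```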
Standard imaging systems such as cameras, radars, and lidars are now widely used for the detection, tracking, and recognition of targets in the direct line-of-sight (LOS) of the imaging system. Challenges arise, however, when objects are not in the system's LOS, typically when an occluder obstructs the imager's field of view. This is known as non-line-of-sight (NLOS) imaging, and it is approached in different ways depending on the imager's operating wavelength. We consider an optical imaging system; the literature offers different approaches in terms of both components and recovery algorithms.
In our optical setup, we assume a system comprising an ultra-fast laser and a single-photon avalanche diode (SPAD). The former is used to sequentially illuminate different points on a diffuse relay wall, causing the photons to scatter in all directions, including toward the target's location. The latter collects the scattered photons as a function of time. In post-processing, back-projection-based algorithms are employed to recover the target's image. Recent publications have focused on the quality of the results as well as on potential algorithm improvements. Here we show results based on a novel theoretical approach (coined "phasor fields"), which suggests treating the NLOS imaging problem as an LOS one. The key feature is to consider the relay wall as a virtual sensor, created by the different points illuminated on the wall. Results show the superiority of this method compared to standard approaches.
The depth resolution achieved by a continuous wave time-of-flight (C-ToF) imaging system is determined by the coding (modulation and demodulation) functions that it uses. We present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. The proposed Hamiltonian coding functions achieve up to an order of magnitude higher resolution as compared to the current state-of-the-art. Using simulations and a hardware prototype, we demonstrate the performance advantages of Hamiltonian coding in a wide range of imaging settings.
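For context, the conventional baseline against which Hamiltonian codes are compared is sinusoid homodyne coding. The sketch below decodes depth from four phase-shifted correlation measurements; it illustrates the standard scheme, not the paper's Hamiltonian coding, and the modulation frequency is an assumed value:

```python
import numpy as np

C = 3e8          # speed of light, m/s
f_mod = 30e6     # modulation frequency, Hz (assumed)

def sinusoid_depth(depth_true, amplitude=1.0, offset=1.5):
    """Classic 4-bucket sinusoid homodyne decoding (noise-free)."""
    phase = 4 * np.pi * f_mod * depth_true / C               # round-trip phase
    k = np.arange(4)
    B = offset + amplitude * np.cos(phase - np.pi * k / 2)   # 4 correlations
    phase_hat = np.arctan2(B[3] - B[1], B[0] - B[2]) % (2 * np.pi)
    return C * phase_hat / (4 * np.pi * f_mod)

print(sinusoid_depth(2.5))   # -> 2.5 (within the ambiguity range C / (2 f_mod))
```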
Ranging and imaging in low light level conditions are a key application of active imaging systems. Typically, intensified cameras (ICCD, EBCMOS) are used to sense the intensity of the reflected laser light pulses used for illumination. Recent developments in single-photon avalanche diodes (SPADs) show that sensors with single-photon counting capabilities are about to revolutionize low light level imaging and laser ranging. These sensors can count detection events caused by single photons with very high timing precision. By applying statistical measurement techniques, the sensitivity of such devices can be increased far beyond that of classical sensing devices, and the required photon flux is significantly lower. New SPAD devices enable the development of novel sensing methods and technologies, and open laser ranging and imaging to new fields of application. Here, we focus on novel hardware structures under development as well as the application of avalanche photodiode detectors for light-in-flight detection and non-line-of-sight imaging.
In an optical Line-of-Sight (LOS) scenario, such as one involving a LIDAR system, the goal is to recover an image of a target in the direct path of the transmitter and receiver. In Non-Line-of-Sight (NLOS) scenarios, the target is hidden from both the transmitter and the receiver by an occluder, e.g., a wall. Recent advancements in technology, computer vision, and inverse light transport theory have shown that it is possible to recover an image of a hidden target by exploiting the temporal information encoded in multiply scattered photons. The core idea is to acquire data using an optical system composed of an ultra-fast laser that emits short pulses (on the order of femtoseconds) and a camera capable of recovering the photons' time-of-flight information (a typical resolution is on the order of picoseconds). We reconstruct 3D images from this data based on the backprojection algorithm, a method typically found in computational tomography, which is parallelizable and memory efficient, although it only provides an approximate solution. Here we present improved backprojection algorithms for applications to large-scale scenes with a large number of scatterers and diameters from meters to hundreds of meters. We apply these methods to the NLOS imaging of rooms and lunar caves.
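A minimal sketch of the core back-projection step, under illustrative naming and geometry (a single sensor point, no filtering or normalization): each voxel accumulates the transient samples whose time of flight matches its laser-to-voxel-to-sensor path.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def backproject(H, laser_pts, sensor_pt, voxels, dt):
    """NLOS back-projection sketch. H has shape (n_laser, n_bins):
    one transient per illuminated relay-wall point."""
    n_laser, n_bins = H.shape
    image = np.zeros(len(voxels))
    for i, lp in enumerate(laser_pts):
        # Total path length: laser spot -> voxel -> sensor spot.
        d = (np.linalg.norm(voxels - lp, axis=1)
             + np.linalg.norm(voxels - sensor_pt, axis=1))
        bins = np.round(d / (C * dt)).astype(int)
        valid = bins < n_bins
        image[valid] += H[i, bins[valid]]
    return image
```

The loop over laser points is embarrassingly parallel and touches each transient once, which is why the method is memory efficient and easy to distribute.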
Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when imaging through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function.
We present a study of scattering, and of methods for imaging through scattering across different scales and media, particularly with respect to the use of time-of-flight information. We show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid laboratory research. We can also transfer knowledge and methodology between different fields.
Light scattering is a primary obstacle to imaging in many environments. On small scales, in biomedical microscopy and diffuse tomography scenarios, scattering is caused by tissue. On larger scales, scattering from dust and fog challenges vision systems for self-driving cars and naval remote imaging systems. We are developing scale models for scattering environments and investigating methods for improved imaging, particularly using time-of-flight transient information.
With the emergence of Single Photon Avalanche Diode detectors and fast semiconductor lasers, illumination and capture on picosecond timescales are becoming possible in inexpensive, compact, and robust devices. This opens up opportunities for new computational imaging techniques that make use of photon time of flight.
Time-of-flight or range information is used in remote imaging scenarios in gated viewing and in biomedical imaging in time-resolved diffuse tomography. In addition, spatial filtering is popular in biomedical scenarios with structured illumination and confocal microscopy. We present a combination of analytical, computational, and experimental models that allow us to develop and test imaging methods across scattering scenarios and scales. This framework will be used for proof-of-concept experiments to evaluate new computational imaging methods.
The application of nonline-of-sight (NLoS) vision and seeing around a corner has been demonstrated in the recent past on a laboratory level with round trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor’s direct field of view. A detailed knowledge about the scattering surfaces is necessary for the analysis. The authors evaluate the realization of dual-mode concepts with the aim of collecting all necessary information to enable both the direct three-dimensional imaging of a scene as well as the indirect sensing on hidden objects. Two different sensing approaches, laser gated viewing (LGV) and time-correlated single-photon counting, are investigated operating at laser wavelengths of 532 and 1545 nm, respectively. While LGV sensors have high spatial resolution, their application for NLoS sensing suffers from a low temporal resolution, i.e., a minimal gate width of 2 ns. On the other hand, Geiger-mode single-photon counting devices have high temporal resolution (250 ps), but the array size is limited to some thousand sensor elements. The authors present detailed theoretical and experimental evaluations of both sensing approaches.
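The two temporal resolutions quoted above map directly to round-trip path-length resolution via Δz = c·Δt/2, as this quick check shows:

```python
# Path-length resolution implied by the quoted temporal resolutions.
C = 3e8  # speed of light, m/s
for name, dt in [("LGV gate, 2 ns", 2e-9), ("TCSPC, 250 ps", 250e-12)]:
    print(f"{name}: {C * dt / 2 * 100:.1f} cm")  # 30.0 cm and 3.8 cm
```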
The application of non-line-of-sight vision and seeing around a corner has been demonstrated in the recent past at the laboratory level, with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor's direct field of view. Recent demonstrator systems were operated at laser wavelengths (800 nm and 532 nm) that are far from the eye-safe shortwave infrared (SWIR) band, i.e., between 1.4 μm and 2 μm. Therefore, their application in public or inhabited areas is difficult with respect to international laser safety conventions. In the present work, the authors evaluate the application of recent eye-safe laser sources and sensor devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a dual-mode concept is studied, enabling both the direct view on a scene and the indirect view on a hidden scene. While recent laser gated viewing sensors have high spatial resolution, their application in non-line-of-sight imaging suffers from too low a temporal resolution due to a minimal sensor gate width of around 150 ns. On the other hand, Geiger-mode single-photon counting devices have high temporal resolution, but their spatial resolution is (until now) limited to array sizes of some thousand sensor elements. In this publication, the authors present detailed theoretical and experimental evaluations.
The application of non-line-of-sight vision has been demonstrated in the recent past at the laboratory level, with round-trip path lengths on the scale of 1 m as well as 10 m. This method uses a computational imaging approach to analyze the scattered information of objects which are hidden from the sensor's direct field of view. In the present work, the authors evaluate the application of recent single-photon counting devices for non-line-of-sight sensing and give predictions on range and resolution. Further, the realization of a concept is studied enabling the indirect view on a hidden scene. Different approaches based on ICCD and GM-APD or SPAD sensor technologies are reviewed. Recent laser gated viewing sensors have a minimal temporal resolution of around 2 ns due to sensor gate widths. Single-photon counting devices have higher sensitivity and higher temporal resolution.
We discuss new approaches to analyze laser-gated viewing data for nonline-of-sight vision with a frame-to-frame back-projection as well as feature selection algorithms. While earlier back-projection approaches use time transients for each pixel, our method calculates the projection of the imaging data onto the voxel space for each frame. Further, different data analysis algorithms and their sequential application were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.
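A generic form of such a local-maximum selection can be written with a maximum filter; the specific filter modification used in the paper is not reproduced here, and the function name and parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def select_local_maxima(volume, size=3, threshold=0.0):
    """Keep voxels that equal the maximum of their neighborhood and
    exceed a noise threshold; returns their indices."""
    peaks = (volume == maximum_filter(volume, size=size)) & (volume > threshold)
    return np.argwhere(peaks)
```

The neighborhood size trades off selectivity (separating nearby targets) against robustness to noise, which mirrors the filter-choice effect discussed above.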
In the present paper, we discuss new approaches to analyze laser gated viewing data for non-line-of-sight vision with a novel frame-to-frame back projection as well as feature selection algorithms. While first back projection approaches use time transients for each pixel, our new method calculates the projection of imaging data onto the obscured voxel space for each frame. Further, four different data analysis algorithms were studied with the aim of identifying and selecting signals from different target positions. A slight modification of commonly used filters leads to a powerful selection of local maximum values. It is demonstrated that the choice of filter has an impact on the selectivity, i.e., multiple-target detection, as well as on the localization precision.
Endoscope cameras play an important and growing role as a diagnostic and surgical tool. The endoscope camera is usually used to provide the operator with a view of the scene straight ahead of the instrument. As is common in many remotely operated systems, the limited field of view and the inability to pan the camera make it challenging to gain a situational awareness comparable to that of an operator with direct access to the scene. We present a spectral multiplexing technique for endoscopes that allows the existing forward view to be overlaid with additional views at different angles to increase the effective field of view of the device. Our goal is to provide peripheral vision while minimally affecting the design and forward image quality of existing systems.
Laser gated viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied for vision through fog, smoke, and other degraded environmental conditions, as well as for vision through sea water in submarine operation. Direct imaging of nonscattered (ballistic) photons is limited in range and performance by the free optical path length, i.e., the length over which a photon can propagate without interaction with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations, making nonline-of-sight imaging possible. The spatial and temporal distributions of scattered photons can be analyzed by means of computational optics, and the information they carry about the scenario can be recovered. In particular, the information outside the line of sight or outside the visibility range is of high interest. We demonstrate nonline-of-sight imaging with a laser gated viewing system and different illumination concepts (point and surface scattering sources).
Laser Gated Viewing is a prominent sensing technology for optical imaging in harsh environments and can be applied to vision through fog, smoke, and other degraded environmental conditions, as well as to vision through sea water in submarine operation. Direct imaging of non-scattered (ballistic) photons is limited in range and performance by the free optical path length, i.e., the length over which a photon can propagate without interaction with scattering particles or object surfaces. The imaging and analysis of scattered photons can overcome these classical limitations, making non-line-of-sight imaging possible. The spatial and temporal distributions of scattered photons can be analyzed by means of computational optics, and the information they carry about the scenario can be recovered. In the case of Lambertian scattering sources, the scattered photons carry information about the complete environment. Especially the information outside the line of sight or outside the visibility range is of high interest. Here, we discuss approaches for non-line-of-sight active imaging with different indirect and direct illumination concepts (point, surface, and volume scattering sources).