Imaging saturation can lead to errors in image processing and result in incomplete or inaccurate three-dimensional (3D) outcomes. To address this problem, we propose an adaptive fringe luminance projection method based on iterative luminance adjustments to the pattern regions corresponding to the highlighted targets. After three to four iterations of updating the intensity distribution of the patterns, we can ensure that the captured images contain no overexposed areas; thus, the intensity variations provide valid information for the subsequent 3D reconstruction. The experimental results show accurate and complete 3D reconstruction of pathological sample surfaces, which meets the requirements for 3D digitization of pathological samples.
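The iterative adjustment loop described above can be sketched as a simple feedback process; the per-region reflectance values, damping factor, and capture model below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Toy sketch of iterative adaptive projection: wherever the captured image
# saturates, scale down the projected intensity and re-capture, repeating
# until no pixel is overexposed. All numbers here are assumed for illustration.
SAT = 255.0
reflectance = np.array([0.2, 1.0, 3.5, 8.0])   # hypothetical per-region gains

def capture(projected):
    # Camera model: linear response clipped at the saturation level
    return np.minimum(projected * reflectance, SAT)

projected = np.full(4, 200.0)                  # initial uniform projection
for it in range(10):
    captured = capture(projected)
    over = captured >= SAT                     # overexposed regions
    if not over.any():
        break
    projected[over] *= 0.7                     # damped reduction (assumed)
print("iterations:", it + 1, "final captured:", capture(projected))
```

The damping factor trades convergence speed against overshoot below the saturation threshold; the paper's method converges in three to four iterations.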
This paper presents a method for optimizing spectral imaging based on incoherent correlation holography. By adjusting the axial position of a piezoelectric transducer (PZT), a series of interference patterns with varying optical path differences (OPD) is captured. Because of the limited precision of the PZT, we propose an optimization algorithm capable of accurately retrieving the intensity variation. By applying a Fourier transform to the image stack, the spatial wavefronts corresponding to different wavelengths can be reconstructed. We demonstrate the effectiveness of our approach through spectral reconstruction under quasi-monochromatic illumination, while also highlighting the potential for multi-spectral imaging applications.
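The wavelength-separation step can be illustrated with a minimal simulation: a Fourier transform along the OPD axis of the interferogram stack isolates each spectral component. The OPD step size, source wavelength, and stack dimensions below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, h, w = 64, 32, 32
opd_step = 0.1e-6                           # OPD per PZT step, metres (assumed)
opd = np.arange(n_steps) * opd_step
wavelength = 633e-9                         # quasi-monochromatic source (assumed)

# Simulated intensity stack: DC term plus per-pixel interference modulation
amp = rng.random((h, w))
stack = 1.0 + amp[None] * np.cos(2 * np.pi * opd[:, None, None] / wavelength)

# FFT along the OPD axis; the strongest non-DC bin marks the source wavenumber
spectrum = np.fft.rfft(stack, axis=0)
power = np.abs(spectrum).sum(axis=(1, 2))
peak_bin = int(np.argmax(power[1:]) + 1)    # skip the DC bin
freqs = np.fft.rfftfreq(n_steps, d=opd_step)  # cycles per metre of OPD
print(f"recovered wavenumber: {freqs[peak_bin]:.3e} 1/m, "
      f"expected: {1 / wavelength:.3e} 1/m")
```

The complex value of `spectrum[peak_bin]` at each pixel carries the spatial wavefront at that wavelength, which is the quantity reconstructed in the paper.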
KEYWORDS: 3D modeling, Infrared imaging, 3D image processing, Infrared radiation, Infrared cameras, Point clouds, Unmanned aerial vehicles, 3D acquisition, Atomic force microscopy, Cameras
3D reconstruction based on infrared images can recover a 3D model with temperature information, which helps observe the appearance and structure of the target more comprehensively and has potential applications in fields that require temperature detection, such as environmental monitoring and risk detection. This paper studies the 3D reconstruction of infrared scenes based on a UAV platform. An infrared camera captures the images, and the incremental structure-from-motion (SfM) algorithm is used for the subsequent 3D reconstruction. The experimental results show that the 3D point cloud from the infrared images is satisfactory, the 3D reconstruction of terrain features such as roads, houses, and trees is successful, and the 3D heat distribution of the scene can be displayed.
The Scheimpflug principle is commonly used in single-camera and multi-camera microscopic fringe projection profilometry (MFPP) systems to extend the mutual overlap range of different views in the object space. We set up a dual-camera MFPP system and performed 3D measurements of plates, standard balls, and some specifically designed samples using the phase-map stereo matching method. We conclude that the dual-camera system achieves higher measurement accuracy, while the single-camera system yields better measurement completeness, which may provide a reference for system design in industrial applications.
Stereo vision plays an essential role in non-contact 3D measurement, employing two cameras to enable applications such as visual synthesis, terrain surveying, and deformation detection. The commonly used Scheimpflug principle states that the object plane, the image plane, and the lens plane intersect in a line; based on this, stereo cameras can be slantwise focused on the object space with an overlapping field of view and depth of field. Building on our previously proposed calibration method, a stereo-rectification method for Scheimpflug telecentric lenses is proposed in this paper. The effectiveness and accuracy of the proposed methods are verified by experiments.
In the lens-based imaging model, the Scheimpflug principle states that the object plane, the image plane, and the lens plane intersect in a line. With this principle, the plane of sharp focus in front of the lens can be tilted by installing a tilted sensor, thereby significantly extending the axial distribution of the clear-imaging area. To calibrate a bi-telecentric lens under the Scheimpflug condition, we derive a concise imaging model and propose a corresponding calibration method that does not require solving for the rotation or tilt angle. The re-projection error is calculated in the experiments to verify the effectiveness of our method.
In long-range imaging systems, one of the main factors limiting the imaging resolution is the aperture size of the imaging lens, which determines the diffraction limit of the optical system. In a practical Fourier ptychography imaging system, system errors such as the aberrations of the imaging devices and the noise of the detector are introduced during acquisition, which further degrade the quality of the reconstructed image. To improve the reconstruction accuracy of Fourier ptychography, this paper discusses several optimization algorithms for recovering the high-resolution details of the restored image. An adaptive-step-size optimization algorithm is used to update the spectrum and aperture function of the current sub-aperture to obtain the high-resolution spectral information of the measured target. The optimal spectrum overlap rate is discussed to reduce the number of acquired images and the computational cost as much as possible. In the reconstruction process, a simulated annealing algorithm corrects the positioning error of each sub-aperture, and the optimization algorithm updates the sub-aperture, which greatly improves the accuracy of the reconstruction results and achieves the theoretical imaging resolution.
In long-distance imaging systems, one of the main factors limiting the imaging resolution is the aperture size of the imaging lens, which determines the diffraction limit of the optical system. Therefore, we propose a non-interferometric synthetic-aperture super-resolution reconstruction and optimization method. A camera array collects a series of low-resolution sub-aperture images. Combined with the Fourier ptychography imaging algorithm, the spectrum and aperture function of the current sub-aperture are updated using an adaptive-step-size optimization algorithm to obtain the high-resolution spectral information of the target to be measured. In the reconstruction process, a simulated annealing algorithm is introduced to correct the positioning error of the sub-apertures, and the optimization algorithm updates the sub-aperture, which greatly improves the accuracy of the reconstruction results and achieves the theoretical imaging resolution. Moreover, the method also produces excellent results for complex objects, verifying its feasibility.
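The spectrum/aperture update at the heart of these two reconstructions can be sketched in an ePIE-style form. This is an illustrative stand-in, not the authors' exact algorithm: a fixed `step` replaces their adaptive step size, and the toy all-pass pupil is an assumption.

```python
import numpy as np

def fp_update(spectrum, pupil, measured_amp, step=1.0):
    """One Fourier-ptychography sub-aperture update (illustrative sketch)."""
    # Forward model: band-limit the HR spectrum with the pupil, go to image space
    exit_wave = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))
    # Amplitude constraint: keep the modelled phase, impose the measured amplitude
    corrected = measured_amp * np.exp(1j * np.angle(exit_wave))
    updated = np.fft.fftshift(np.fft.fft2(corrected))
    diff = updated - spectrum * pupil
    # Joint spectrum/pupil refresh, each weighted by the conjugate of the other
    new_spectrum = spectrum + step * np.conj(pupil) * diff / (np.abs(pupil).max() ** 2 + 1e-12)
    new_pupil = pupil + step * np.conj(spectrum) * diff / (np.abs(spectrum).max() ** 2 + 1e-12)
    return new_spectrum, new_pupil

# Toy usage: with an all-pass pupil, one unit-step update reproduces the measurement
n = 16
rng = np.random.default_rng(3)
measured = rng.random((n, n)) + 0.1
pupil0 = np.ones((n, n), dtype=complex)
spec0 = np.fft.fftshift(np.fft.fft2(np.ones((n, n), dtype=complex)))
new_spec, _ = fp_update(spec0, pupil0, measured)
residual = np.abs(np.abs(np.fft.ifft2(np.fft.ifftshift(new_spec * pupil0))) - measured).max()
print("amplitude residual after one update:", residual)
```

In a full reconstruction this update runs over many overlapping sub-apertures, and the papers additionally perturb each sub-aperture's position via simulated annealing before the update.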
Convolutional neural networks have been successfully applied to super-resolution of visible images. In this paper, we propose an infrared image super-resolution algorithm based on an auxiliary convolutional neural network, which uses the detail information provided by a visible image under low-light conditions for super-resolution imaging of the infrared image. In this algorithm, the infrared image and the visible image are input into the convolutional neural network at the same time to obtain a high-resolution infrared image. The results show that the super-resolved infrared image contains more detailed information. Compared with other super-resolution methods, the proposed network achieves higher reconstruction efficiency.
We report a new computational super-resolution (SR) imaging technique, termed coded aperture super-resolution imaging (CASR), which modulates the point spread function (PSF) of the imaging system by rotating the aperture pattern. The pattern is designed in an anisotropic manner so that the PSF spreads across multiple pixels and contains clues about high-frequency structure. A fundamental difference between our approach and conventional multi-image super-resolution is that CASR accounts for the diffraction effect explicitly, with no need for relative motion between the scene and the detector. With CASR, we design and construct two sets of programmable-aperture photoelectric imaging systems in the visible spectrum. The achievable equivalent Nyquist sampling frequency of the detectors is increased to 3.57×. Furthermore, the technique can be flexibly applied to long-distance HR detection thanks to its fast response, lack of mechanical movement, and resistance to airflow disturbance.
As digital projectors develop, fringe projection profilometry has been widely used for fast 3D measurement. However, the field of view of traditional 3D measurement systems is commonly on the decimeter scale, which limits the 3D reconstruction accuracy to tens of microns. To improve the accuracy further, we have to minimize the field of view while increasing the fringe density in space. For this purpose, we developed two kinds of systems, based on a stereo-microscope and on telecentric lenses, respectively. We also studied the corresponding calibration frameworks and developed fast 3D measurement methods with both Fourier transform and phase-shifting algorithms for real-time 3D reconstruction of micro-scale objects.
In fringe projection profilometry, using denser fringes can improve the measurement accuracy. In real-time measurement situations, the number of fringe patterns is limited to reduce motion-induced errors, which, however, makes absolute phase recovery from dense fringes more difficult. In this paper, we propose a stereo phase matching method that combines the high accuracy of denser fringes with the high efficiency of using only two fringe frequencies. The phase map is divided into several sub-areas, and in each sub-area the phase is unwrapped independently. The correct matched pixel is easily selected from the candidates distributed across different sub-areas with the help of geometry constraints.
KEYWORDS: 3D modeling, Cameras, 3D acquisition, 3D metrology, Clouds, Image registration, 3D image processing, Imaging systems, Data modeling, Projection systems
Three-dimensional (3D) registration or matching is a crucial step in 3D model reconstruction. In this work, we develop a real-time 3D point cloud registration technology. Firstly, to achieve real-time 3D data acquisition, the stereo phase unwrapping method is utilized to eliminate the ambiguity of the wrapped phase, assisted by a depth constraint strategy, without projecting any additional patterns or embedding any auxiliary signals. Then we perform SLAM-based coarse registration and ICP-based fine registration to match the point cloud data after the rapid identification of two-dimensional (2D) feature points. To improve the efficiency of 3D registration, the relative motion of the measured object at each coarse registration is quantified, so that only one fine registration is performed after several coarse registrations. Experiments show that a complex model can be registered in real time to reconstruct its complete 3D model with our method.
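The fine-registration stage can be illustrated with a minimal point-to-point ICP iteration: brute-force nearest-neighbour matching followed by a closed-form rigid (Kabsch/SVD) alignment. The toy clouds and the small rigid motion below are assumptions, not the authors' data:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration (sketch)."""
    # Brute-force nearest neighbours (fine for small illustrative clouds)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Closed-form rigid transform between the matched sets (Kabsch)
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t

# Toy usage: undo a small rigid motion between two copies of a random cloud
rng = np.random.default_rng(1)
dst = rng.random((100, 3))
theta = 0.02
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
src = dst @ R.T + np.array([0.01, -0.01, 0.005])
for _ in range(20):
    src = icp_step(src, dst)
resid = np.linalg.norm(src - dst, axis=1).mean()
print("mean residual after ICP:", resid)
```

ICP of this form only converges from a good initial guess, which is why the paper pairs it with SLAM-based coarse registration.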
Phase-shifting profilometry (PSP) is a widely used approach to high-accuracy three-dimensional shape measurement. However, when it comes to moving objects, phase errors induced by the movement often result in severe artifacts even when a high-speed camera is in use. From our observations, there are three kinds of motion artifacts: motion ripples, motion-induced phase unwrapping errors, and motion outliers. We present a novel motion-compensated PSP to remove these artifacts for dynamic measurements of rigid objects. The phase error of motion ripples is analyzed for the phase-shifting algorithm and is compensated using the statistical nature of the fringes. The phase unwrapping errors are corrected by exploiting adjacent reliable pixels, and the outliers are removed by comparing the original phase map with a smoothed phase map. Compared with the three-step PSP, our method can improve the accuracy significantly for objects in motion.
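The three-step PSP baseline referenced above can be sketched with synthetic fringes. The fringe model `I_n = A + B*cos(phi + 2*pi*n/3)` and the constants below are standard textbook assumptions, not the paper's data:

```python
import numpy as np

h, w = 64, 64
x = np.linspace(0, 1, w)
phi_true = np.tile(2 * np.pi * 8 * x, (h, 1))   # 8-period fringe phase (assumed)
A, B = 0.5, 0.4                                  # background and modulation (assumed)

# Three phase-shifted fringe images, shifts of 0, 2*pi/3, 4*pi/3
I = [A + B * np.cos(phi_true + 2 * np.pi * n / 3) for n in range(3)]

# Wrapped phase from the standard three-step formula
phi_wrapped = np.arctan2(np.sqrt(3) * (I[2] - I[1]), 2 * I[0] - I[1] - I[2])

# Compare against the true phase, wrapped to (-pi, pi]
err = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print("max wrapped-phase error:", np.abs(err).max())
```

For a static scene this recovery is exact; the paper's contribution is compensating the errors that appear when the object moves between the three exposures.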
KEYWORDS: Cameras, Projection systems, Imaging systems, 3D metrology, Optical spheres, 3D modeling, Reliability, 3D image reconstruction, Three dimensional sensing, 3D image processing
In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density, without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that the resulting dynamic 3D sensing system can realize real-time 3D reconstruction with a depth precision of 50 μm.
A distortion-free telecentric camera does not have an optical center because of its orthogonal projection. However, the position of the optical center must be known when lens distortion is considered. Since the full set of parameters is derived through an iterative algorithm, reliable initial values of the optical center must be provided to avoid being trapped in local minima. In this paper, we propose a two-step algorithm to estimate the optical center as a trustworthy initial value for the subsequent iteration process. The first step directly calculates the pixel coordinates of the lateral distortion center using the extracted control points. The second step optimizes both the lateral and tangential coefficients, considering the properties of the affine transformation in the imaging process. The effectiveness of our proposed method is proven by measurement results from a newly developed microscopic telecentric stereovision system.
An improved bi-frequency phase-shifting technique based on a multi-view fringe projection system is proposed, which significantly enhances the measurement precision without compromising the measurement speed. Based on the geometric constraints in a multi-view system, the unwrapped phase of the low-frequency (10-period) fringes can be obtained directly, which serves as a reference to unwrap the high-frequency phase map with a total number of periods of up to 160. Experiments on both static and dynamic scenes are performed, verifying that our method can achieve real-time and high-precision 3-D measurement with a precision of about 50 μm.
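The core of bi-frequency unwrapping is using the scaled low-frequency unwrapped phase to determine the fringe order of the high-frequency wrapped phase. A minimal 1-D sketch with noise-free synthetic phases (the 10/160 period counts are taken from the abstract; everything else is assumed):

```python
import numpy as np

f_low, f_high = 10, 160
x = np.linspace(0, 1, 1024, endpoint=False)
phi_low_unwrapped = 2 * np.pi * f_low * x                # reference phase (given)
phi_high_true = 2 * np.pi * f_high * x
phi_high_wrapped = np.angle(np.exp(1j * phi_high_true))  # wrapped to (-pi, pi]

# Fringe order from the scaled low-frequency reference
scaled = phi_low_unwrapped * (f_high / f_low)
k = np.round((scaled - phi_high_wrapped) / (2 * np.pi))
phi_high_unwrapped = phi_high_wrapped + 2 * np.pi * k

print("max unwrap error:", np.abs(phi_high_unwrapped - phi_high_true).max())
```

In practice the rounding step tolerates reference-phase noise up to half a high-frequency period, which is why the system's geometric constraints must deliver the low-frequency phase reliably.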
Microscopic 3-D shape measurement can provide accurate metrology of delicate and complex MEMS components to ensure the proper performance of the final devices. Fringe projection profilometry (FPP) has the advantages of non-contact operation and high accuracy, making it widely used in 3-D measurement. Recently, tremendous advances in electronics have made 3-D measurement more accurate and faster. However, research on real-time microscopic 3-D measurement is still rarely reported. In this work, we effectively combine optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time 3-D shape measurement. A slight defocusing of our proposed binary patterns can considerably alleviate the measurement error in phase-shifting FPP, giving the binary patterns performance comparable to ideal sinusoidal patterns. Real-time 3-D measurement at about 120 frames per second (fps) is achieved, and an experimental result of a vibrating earphone is presented.
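The number-theoretical unwrapping idea can be illustrated with two coprime fringe periods: their wrapped phases jointly determine a unique absolute position over the product of the periods, found here by a brute-force order search. The toy periods (7 and 9 pixels) and the 1-D coordinate are assumptions, not the paper's parameters:

```python
import numpy as np

l1, l2 = 7, 9                     # coprime fringe periods in pixels (assumed)
x_true = np.arange(0, l1 * l2, 0.5)
phi1 = 2 * np.pi * (x_true % l1) / l1
phi2 = 2 * np.pi * (x_true % l2) / l2

# Candidate absolute positions for every possible fringe order of pattern 1
k1 = np.arange(l2)                # orders 0 .. l2-1 cover the full range
cand = (phi1[None, :] / (2 * np.pi) + k1[:, None]) * l1

# Pick the order whose position best agrees with the second wrapped phase
diff = np.abs(np.angle(np.exp(1j * (2 * np.pi * cand / l2 - phi2[None, :]))))
best = diff.argmin(axis=0)
x_rec = cand[best, np.arange(x_true.size)]
print("max recovery error:", np.abs(x_rec - x_true).max())
```

Because gcd(l1, l2) = 1, a wrong order always disagrees with the second phase by at least 2π/l2, so the search is unambiguous in the noise-free case.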
We propose an absolute 3D micro-surface profile measurement technique based on a Greenough-type stereomicroscope. The camera and the projector are fixed on the stereomicroscope, facilitating flexible 3D measurement of objects with different heights. Experiments on both calibration and measurement are conducted, and the results show that our proposed method works well for measuring different types of geometry such as spheres, ramps, and planes. The reconstruction accuracy reaches 4.8 μm with a measurement depth of about 3 mm.
We introduce a high-speed 3-D shape measurement technique based on composite phase-shifting fringes and a stereo camera system. The epipolar constraint is adopted to search for corresponding points independently without additional images. Meanwhile, by analysing the 3-D position and the main wrapped phase of each corresponding point, pairs with an incorrect 3-D position or a considerable phase difference are effectively rejected. Then all the qualified corresponding points are corrected, and the unique one, as well as the related period order, is selected through the embedded triangular wave. Finally, considering that some points can only be captured by a single camera in shaded areas, the final period order of these points in one camera and that of their corresponding points in the other camera always differ, so a left-right consistency check is employed to eliminate such erroneous period orders. Several experiments on both static and dynamic scenes are performed, verifying that our method can achieve a speed of 120 frames per second (fps) with 25-period fringe patterns for fast, dense, and accurate 3-D measurement.
KEYWORDS: Digital holography, Holography, Charge-coupled devices, Holograms, Microscopes, Beam expanders, Digital recording, Digital filtering, Optical signal processing, Beam splitters
We design a lensless, compact holographic system. A conventional holographic setup uses a beam expander to produce parallel light, and a beam splitter then separates the light into two parts: one illuminates the object and the other serves as the reference beam. In our system, instead of using a beam expander to generate a parallel beam, the laser is delivered directly by a fiber, which provides a spherical wave centered at the fiber's output port. For this reason, our system contains fewer optical components, so the setup is more compact. The only processing needed is to eliminate the second-order aberration caused by the different lengths of the two paths and by the slight off-axis arrangement. An experiment on aberration compensation using principal component analysis is presented, and the result shows that the system works well.
We demonstrate lensless quantitative phase microscopy and diffraction tomography based on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ∼3.7 μm and an axial resolution of ∼5 μm over a large imaging FOV of 24 mm². The resolution and FOV can be further improved straightforwardly by using a larger image sensor with smaller pixels. This compact, low-cost, robust, and portable platform with decent imaging performance may offer a cost-effective tool for telemedicine, or for reducing health care costs in point-of-care diagnostics in resource-limited environments.
Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique that employs angularly varying illumination and a phase retrieval algorithm to surpass the diffraction limit of the objective lens. Specifically, FPM captures a set of low-resolution (LR) images under angularly varying illumination and stitches them together in the Fourier domain. However, because of the large number of required illumination angles, the long capturing process becomes an obvious limiting factor. Furthermore, to acquire high-dynamic-range images, the acquisition time can increase several times over. In this work, utilizing the Hadamard code principle, we propose a highly efficient method that applies coded multi-angular illumination to FPM to shorten the exposure time of each raw image. High acquisition efficiency is achieved by employing an optimal multi-angular illumination scheme using two sets of Hadamard-coded multiplexing patterns. Both simulation and experimental results indicate that the proposed multi-angular illumination process can shorten the acquisition time of conventional FPM.
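The Hadamard-coded multiplexing idea can be sketched as follows: several LEDs are lit per exposure according to a code matrix, and the single-LED images are recovered by inverting the code. The LED count, images, and on/off coding below are illustrative assumptions; the paper's two optimized pattern sets are not reproduced here.

```python
import numpy as np

def sylvester_hadamard(n):
    """Build an n x n Hadamard matrix (n a power of two) by Sylvester's construction."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 8                                    # number of illumination angles (toy)
H = sylvester_hadamard(n)
# Map +/-1 codes to on/off LED patterns (0/1), since a camera can only add light
S = (H + 1) / 2

rng = np.random.default_rng(2)
singles = rng.random((n, 16))            # hypothetical single-LED raw images
multiplexed = S @ singles                # each capture sums a coded LED subset

decoded = np.linalg.solve(S, multiplexed)
print("max decode error:", np.abs(decoded - singles).max())
```

The gain comes from each exposure collecting light from roughly half the LEDs, raising the signal level per frame and so allowing shorter exposure times at the same noise level.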