This PDF file contains the front matter associated with SPIE Proceedings Volume 9117, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Holographic stereogram (HS) printing is more vulnerable to vibration when the holographic film is exposed with a continuous-wave laser than with a pulsed laser. A continuous-wave laser requires a much longer exposure to form the fringe pattern, so the optical system is normally mounted on an anti-vibration platform such as an optical table, which is a latent drawback for a commercially available printer. This paper covers the design of a holographic stereogram printing system built for an ordinary environment without vibration isolation, where ambient noise is present. To make the system robust against common sources of vibration, we designed an optical system that minimizes the effects of ambient noise and reduces vibration within the optics. The main source of noise is the stage that transfers the hologram plate hogel by hogel. To accelerate film transport, we devised and applied an anti-vibration algorithm that reduces the vibration significantly, together with an open-frame architecture. The holographic stereograms are printed as one-step full-parallax stereograms generated with a re-centered camera setup. The optical system features a single signal-beam converging module, which minimizes the number of optical components, along with tailored optics; an open-frame film stage is integrated into the HS system. In the experiments, horizontal-parallax and full-parallax prints with 1 mm × 1 mm hogels, in 50 × 50 and 100 × 100 arrays, were produced to verify the proposed HS printing system.
The use of non-telecentric imaging systems in quantitative-phase digital holographic microscopy introduces strong inaccuracies. We show that even small errors in the radius and center of curvature of the remaining quadratic phase factor introduce large errors in the numerical phase measurements, and that these errors depend on the position of the object in the field of view. When a telecentric imaging system is used to record the holograms, however, the hybrid imaging method shows shift-invariant behavior, and accurate quantitative phase imaging can be performed.
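The sensitivity to the residual quadratic phase can be checked numerically. The sketch below (NumPy, with hypothetical wavelength, pixel pitch, and radius values, not taken from the paper) compensates a field with the conjugate of the estimated quadratic phase factor: an exact estimate leaves a flat phase, while a 1% error in the radius already leaves a position-dependent phase error across the field of view.

```python
import numpy as np

def quadratic_phase(shape, pitch, wavelength, radius, center=(0.0, 0.0)):
    """Spherical/quadratic phase factor exp(i*pi*r^2 / (lambda*R))
    sampled on the sensor grid (pitch in metres)."""
    ny, nx = shape
    y = (np.arange(ny) - ny / 2) * pitch - center[1]
    x = (np.arange(nx) - nx / 2) * pitch - center[0]
    X, Y = np.meshgrid(x, y)
    return np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * radius))

wavelength, pitch, R = 633e-9, 5e-6, 0.2      # hypothetical values

# Reconstructed field carrying only the residual quadratic phase.
field = quadratic_phase((256, 256), pitch, wavelength, R)

# Exact compensation: the conjugate factor cancels the curvature completely.
flat = field * np.conj(quadratic_phase((256, 256), pitch, wavelength, R))

# A 1% error in the estimated radius leaves a phase error that grows with
# distance from the curvature center, i.e. it depends on where the object
# sits in the field of view.
bad = field * np.conj(quadratic_phase((256, 256), pitch, wavelength, 1.01 * R))
```

Scanning the error over radius and center in this way reproduces the position-dependent bias the abstract describes.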
We demonstrate a depth measurement method for holographic images using integral imaging. The depth information of a holographic image can be obtained with a single capture by a conventional integral imaging pickup system composed of a micro-lens array (MLA) and an image sensor. To verify the feasibility of the proposed method, an elemental image set of holographic images formed by an MLA was generated by computer, and refocused images at different planes were then reconstructed numerically using the computational integral imaging reconstruction (CIIR) technique for depth measurement. Note that we set the distance between the MLA and the image sensor to the focal length of the micro lenses to obtain a large depth of focus. From the numerical results, we can measure the depth of holographic images successfully. However, refocused images from an optically captured elemental image set provide poor depth discrimination because of errors in the distance between the MLA and the image sensor: when the image sensor is placed away from the MLA focal plane, only objects in a particular narrow depth range can be focused clearly. Simulated results under this condition matched the experimental results reasonably well.
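The CIIR step can be sketched as a plain shift-and-average over the elemental images. The code below (NumPy, an illustrative sketch rather than the authors' implementation) refocuses a synthetic elemental image set of a point object and shows that the point only sharpens when the reconstruction shift matches its disparity.

```python
import numpy as np

def ciir_refocus(elemental, shift_px):
    """Computational integral-imaging reconstruction by shift-and-average:
    each elemental image is displaced in proportion to its lens index
    (shift_px pixels per lens) and the results are averaged. Scanning
    shift_px scans the reconstruction plane through depth."""
    rows, cols, h, w = elemental.shape
    out = np.zeros((h, w))
    for r in range(rows):
        for c in range(cols):
            dy = (r - rows // 2) * shift_px
            dx = (c - cols // 2) * shift_px
            out += np.roll(elemental[r, c], (dy, dx), axis=(0, 1))
    return out / (rows * cols)

# Synthetic elemental images of a single point with a per-lens disparity of 2 px.
rows = cols = 3
h = w = 21
d = 2
ei = np.zeros((rows, cols, h, w))
for r in range(rows):
    for c in range(cols):
        ei[r, c, 10 - (r - 1) * d, 10 - (c - 1) * d] = 1.0

in_focus = ciir_refocus(ei, shift_px=d)    # matched depth: sharp peak
out_focus = ciir_refocus(ei, shift_px=1)   # wrong depth: energy spread out
```

A depth map follows by scanning `shift_px` and keeping, per region, the shift that maximizes a sharpness metric such as local variance.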
A horizontally scanning holographic display, which consists of a MEMS SLM, an anamorphic imaging system, and a horizontal scanner, enlarges both the screen size and the viewing-zone angle. Grayscale 3D images are generated by modulating the laser light illuminating the MEMS SLM: because the MEMS SLM generates elementary holograms at a high frame rate, successive elementary holograms overlap substantially, and each elementary hologram is displayed with a different laser power. Recently, 3D color images were also generated by introducing RGB lasers, with each elementary hologram displayed in a different color. Color images with a screen size of 6.2 in. and viewing-zone angles of 15° (R), 12° (G), and 11° (B) were produced using a digital micromirror device as the MEMS SLM.
In modern optical systems, discrete digital devices for measuring intensity distributions play an indispensable role. The intensity incident on a CCD or CMOS array is averaged over the spatial extent of each pixel for a given exposure time. Fluctuations in laser power, vibrations of the optical table, and electronic noise from the digital sensor all contribute to a base-line noise level for a particular optical system; hence the intensity value measured by each pixel fluctuates over time. In this paper we investigate the temporal noise of cameras using a speckle field, over a range of camera parameters such as exposure time, gain factor, and light power. We then examine how this base-line noise level changes when the incident speckle field is mixed with a plane reference wave to form a hologram at the camera plane. We comment on our experimental results and how they apply to general optical systems that measure the phase distribution of a complex field.
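The base-line noise measurement described above amounts to computing per-pixel temporal statistics over a stack of frames. The toy model below (NumPy; all noise magnitudes are invented for illustration, not measured) stands in for a static speckle field recorded repeatedly with additive sensor noise and a small global laser-power drift.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for N frames of a static speckle field recorded over time:
# a fixed speckle pattern, a small global laser-power drift, and
# additive sensor noise (all magnitudes invented for illustration).
n_frames, h, w = 200, 64, 64
speckle = rng.exponential(scale=100.0, size=(h, w))            # static pattern
power = 1.0 + 0.01 * rng.standard_normal((n_frames, 1, 1))     # laser drift
frames = power * speckle + rng.normal(0.0, 2.0, (n_frames, h, w))

# Per-pixel temporal statistics define the base-line noise level.
temporal_mean = frames.mean(axis=0)
temporal_std = frames.std(axis=0)
baseline = temporal_std.mean()   # single figure of merit, in counts
```

Repeating this measurement for each exposure time, gain, and light power traces out how the base line moves with the camera parameters.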
We present our recent advances in the development of compact, highly portable, and inexpensive wide-field interferometric modules. Through careful design of the interferometric system, including the use of low-coherence illumination sources and a common-path off-axis geometry, the spatial and temporal noise levels of the resulting quantitative thickness profiles can be sub-nanometric while the phase profile is processed in real time. In addition, novel experimentally implemented multiplexing methods allow us to capture low-coherence off-axis interferograms with a significantly extended field of view and at faster acquisition rates. Using these techniques, we quantitatively imaged the rapid dynamics of live biological cells, including sperm cells and unicellular microorganisms. We then demonstrated dynamic profiling during lithography of microscopic elements, with thicknesses ranging from several nanometers to hundreds of microns. Finally, we present new algorithms for fast reconstruction (including digital phase unwrapping) of off-axis interferograms, which allow real-time processing at more than video rate on regular single-core computers.
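Fast reconstruction of an off-axis interferogram is typically a single Fourier-side filtering pass: isolate the +1 order, re-center it to remove the carrier, and inverse-transform. The sketch below (NumPy, with an assumed known carrier frequency; a generic textbook pipeline, not the authors' algorithm) recovers the phase of a synthetic hologram.

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier, radius):
    """Off-axis reconstruction by Fourier filtering.
    carrier : (fy, fx) carrier frequency in FFT-bin units (assumed known)
    radius  : radius of the circular crop around the +1 order, in bins"""
    ny, nx = hologram.shape
    H = np.fft.fftshift(np.fft.fft2(hologram))
    Y, X = np.ogrid[:ny, :nx]
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    mask = (Y - cy) ** 2 + (X - cx) ** 2 <= radius ** 2
    plus_one = np.where(mask, H, 0)
    # Shift the +1 order back to the spectrum center (removes the carrier).
    plus_one = np.roll(plus_one, (-carrier[0], -carrier[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(plus_one))

# Synthetic off-axis hologram of a smooth phase object.
N = 128
y, x = np.mgrid[:N, :N]
phi = 1.0 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 15 ** 2))
obj = np.exp(1j * phi)
ref = np.exp(-2j * np.pi * 32 * x / N)        # plane wave, 32 cycles across
holo = np.abs(obj + ref) ** 2                 # intensity actually recorded
field = reconstruct_off_axis(holo, carrier=(0, 32), radius=12)
phase_err = np.angle(field * np.exp(-1j * phi))
```

Because the whole pipeline is two FFTs and a mask, it is cheap enough to run at video rate, which is what makes the real-time processing claim plausible on a single core.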
Speckle reduction in 3D displays based on computational holography is investigated. Since an important advantage of holographic 3D displays is their capability for high-resolution display of deep 3D scenes, the resolution of the image after speckle suppression is discussed first. We then present a method for speckle reduction in electronic holographic displays based on the recently proposed approach that time-multiplexes holograms of sparse object points. The method is adapted to the hologram-calculation technique based on a ray-sampling plane, which makes it possible to apply speckle suppression to the holographic display of deep 3D scenes. Computer simulations show that even a small number of time-multiplexed frames effectively reduces the speckle noise. Additionally, an angular-multiplexing technique is presented to reduce the number of frames required for time-multiplexing, and experimental results show the effectiveness of the proposed speckle suppression.
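The statistics behind sparse-point time-multiplexing can be illustrated with a toy phasor-sum model (NumPy; this captures only the speckle statistics, not the ray-sampling hologram calculation): showing the object points in M sequential sparse subsets and letting the eye integrate the intensities cuts the speckle contrast roughly as 1/sqrt(M).

```python
import numpy as np

rng = np.random.default_rng(1)

def frame_intensity(n_points, size=64):
    """Intensity of one displayed frame, modelled as the interference of
    n_points fully overlapping unit phasors with random phases."""
    phases = rng.uniform(0.0, 2.0 * np.pi, (n_points, size, size))
    return np.abs(np.exp(1j * phases).sum(axis=0)) ** 2

def contrast(I):
    """Speckle contrast C = sigma_I / <I> (C ~ 1 for developed speckle)."""
    return I.std() / I.mean()

# All 64 object points in one frame: fully developed speckle, C close to 1.
single = frame_intensity(64)

# Time-multiplexing: the same 64 points split into M = 16 sparse subsets
# shown sequentially; the eye integrates the M intensities, and the
# contrast drops roughly as 1/sqrt(M).
M = 16
averaged = np.mean([frame_intensity(64 // M) for _ in range(M)], axis=0)
```

In a real display the sparse subsets also keep neighboring points from interfering within a frame, which is what preserves resolution while the contrast falls.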
We propose accelerated generation of color computer-generated holograms (CGHs) from three-dimensional (3D) scenes expressed as texture (RGB) and depth (D) images. Such images are produced by 3D graphics libraries and RGB-D cameras, for example OpenGL and Kinect, respectively, and can be regarded as two-dimensional (2D) cross-sectional images along the depth direction. Generating CGHs from the 2D cross-sectional images requires multiple diffraction calculations. With convolution-based diffraction such as the angular spectrum method, the diffraction calculation takes a long time and requires a large amount of memory, because the 2D cross-sectional images must be expanded to avoid wraparound noise. In this paper, we first describe acceleration of the diffraction calculation using band-limited double-step Fresnel diffraction, which does not require this expansion. Next, we describe acceleration of color CGH generation using color-space conversion. Color CGHs are generally computed in RGB color space, where the same calculation must be repeated for each color component, so the computational burden is three times that of monochrome CGH generation. We can reduce this burden by working in YCbCr color space, because the 2D cross-sectional images in YCbCr color space can be down-sampled without impairing the image quality.
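The color-space step can be sketched in a few lines (NumPy; BT.601 full-range coefficients are an assumption here, since the abstract does not specify the conversion matrices): convert the RGB cross-sections to YCbCr, carry Cb/Cr at half resolution, and convert back. On smooth content the quality cost is small while the chroma data to be diffracted shrinks fourfold.

```python
import numpy as np

# ITU-R BT.601 full-range RGB <-> YCbCr conversion (assumed matrices).
def rgb_to_ycbcr(rgb):
    m = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycc = rgb @ m.T
    ycc[..., 1:] += 0.5
    return ycc

def ycbcr_to_rgb(ycc):
    ycc = ycc.copy()
    ycc[..., 1:] -= 0.5
    m = np.array([[1.0,  0.0,      1.402  ],
                  [1.0, -0.34414, -0.71414],
                  [1.0,  1.772,    0.0    ]])
    return ycc @ m.T

# Smooth test cross-section (gradients), values in [0, 1).
y, x = np.mgrid[:64, :64] / 64.0
rgb = np.stack([x, y, 0.5 * (x + y)], axis=-1)
ycc = rgb_to_ycbcr(rgb)

# 2x2 chroma down-sampling: Cb/Cr carried at half resolution.
cb = ycc[..., 1].reshape(32, 2, 32, 2).mean(axis=(1, 3))
cr = ycc[..., 2].reshape(32, 2, 32, 2).mean(axis=(1, 3))
ycc_ds = ycc.copy()
ycc_ds[..., 1] = np.repeat(np.repeat(cb, 2, axis=0), 2, axis=1)
ycc_ds[..., 2] = np.repeat(np.repeat(cr, 2, axis=0), 2, axis=1)
back = ycbcr_to_rgb(ycc_ds)
err = np.abs(back - rgb).max()
```

In the CGH context the gain is that the down-sampled Cb/Cr planes need diffraction calculations on quarter-size arrays, while the full-resolution Y plane preserves the luminance detail the eye is most sensitive to.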
Digital holography, like any other coherent imaging modality, is subject to speckle noise. Speckle can significantly degrade image quality, and many optical and digital techniques have therefore been developed to suppress it. In this paper we present a comparison of six digital speckle filtering techniques used in digital holography.
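A minimal version of such a comparison needs only a noise model, the candidate filters, and an error metric. The sketch below (NumPy, with a multiplicative fully developed speckle model and two of the simplest filters as hypothetical stand-ins, since the six techniques themselves are not listed in the abstract) compares mean-squared error before and after filtering.

```python
import numpy as np

def _windows3(img):
    """All nine 3x3-shifted copies of img (reflected borders)."""
    p = np.pad(img, 1, mode='reflect')
    return np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)])

def mean_filter3(img):
    return _windows3(img).mean(axis=0)

def median_filter3(img):
    return np.median(_windows3(img), axis=0)

rng = np.random.default_rng(0)
y, x = np.mgrid[:64, :64] / 64.0
clean = 0.5 * (x + y)                               # smooth test scene
noisy = clean * rng.exponential(1.0, clean.shape)   # fully developed speckle

mse = lambda a: np.mean((a - clean) ** 2)
results = {'noisy': mse(noisy),
           'mean3': mse(mean_filter3(noisy)),
           'median3': mse(median_filter3(noisy))}
```

A fuller comparison in the spirit of the paper would add edge-preservation and resolution metrics, since MSE alone rewards over-smoothing.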
High-speed multicolor 3D motion-picture recording of a 3D object was experimentally demonstrated using multiwavelength parallel phase-shifting digital holography. Parallel phase-shifting digital holography obtains the complex amplitude distribution of an object wave from a single hologram, based on space-division multiplexing of multiple phase-shifted holograms. We combine parallel phase shifting with angular multiplexing to capture multicolor information simultaneously with a monochrome image sensor. 3D space, phase, and wavelength information is reconstructed simultaneously by recording a monochrome hologram with an image sensor equipped with a polarization-detection function. Color 3D motion-picture recording of objects moving at speeds above 20 km/h was achieved at 20,000 frames per second. This result is the first experimental demonstration of multiwavelength parallel phase-shifting digital holography.
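The demodulation at the heart of phase-shifting holography is a closed-form combination of four phase-shifted intensities. The sketch below (NumPy; in the actual parallel scheme the four values are interleaved pixel by pixel in a single polarization-resolved frame, whereas here they are kept as four full arrays for clarity) recovers a complex object wave exactly.

```python
import numpy as np

def four_step(I0, I90, I180, I270, ref_amp=1.0):
    """Recover the complex object wave U from four phase-shifted intensities
    I_d = |U + a*exp(i*d)|^2 at d = 0, pi/2, pi, 3*pi/2:
    I0 - I180 = 4a*Re(U),  I90 - I270 = 4a*Im(U)."""
    return ((I0 - I180) + 1j * (I90 - I270)) / (4.0 * ref_amp)

# Demo on a random complex object wave with a unit-amplitude reference.
rng = np.random.default_rng(0)
U = rng.random((32, 32)) * np.exp(2j * np.pi * rng.random((32, 32)))
a = 1.0
Is = [np.abs(U + a * np.exp(1j * d)) ** 2
      for d in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
U_rec = four_step(*Is, ref_amp=a)
```

In the space-division-multiplexed version, the four intensities come from neighboring pixels behind different polarization elements, so the same formula runs on interpolated sub-images of one camera frame.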
A plano-convex electrode is presented for a liquid crystal lens array with a hexagonal arrangement, a small inactive region, a 30 μm cell gap, and a low drive voltage. It uses circular curved electrodes to provide a smooth, controllable potential profile across the aperture, which in turn shapes the phase profile.
Detecting the 3D depth information of objects in a deep scene is difficult because of the limited depth of field (DoF) of cameras. In this paper, we propose a 3D depth-map capturing system with a high dynamic depth range (HDDR). Unlike conventional extended-depth-of-field (EDoF) methods, the HDDR method does not deteriorate image quality. By imitating an active tunable m × n lens array focusing on a sequence of imaging planes, every object in the scene is captured clearly by at least three elemental lenses. We estimate the elemental depth maps individually using depth from disparity, and then fuse them into one all-in-focus depth map. Compared with conventional 3D cameras, the working range of an HDDR system with a 3 × 3 camera array can be extended from 90 cm to 165 cm.
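The per-lens depth estimation reduces to the pinhole-stereo relation z = f·b/d, and the fusion step keeps, per pixel, the estimate from the better-focused lens pair. A minimal sketch (with a hypothetical focal length, baseline, and confidence map, none taken from the paper):

```python
import numpy as np

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole-stereo depth z = f * b / d, with f in pixels, the lens
    baseline b in metres, and the disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# Hypothetical elemental pair: f = 800 px, b = 10 mm between lenses.
z = depth_from_disparity(800, 0.01, np.array([8.0, 5.0, 4.0]))

# Fusing two elemental depth maps into an all-in-focus map: per pixel,
# keep the estimate from the better-focused lens pair, represented here
# by a per-pixel confidence (e.g. local contrast).
z1, z2 = np.full((4, 4), 0.9), np.full((4, 4), 1.6)
conf1 = np.array([[1, 1, 0, 0]] * 4)       # 1 where pair 1 is sharper
fused = np.where(conf1 > 0.5, z1, z2)
```

Because each lens focus plane keeps disparity measurable over its own depth band, stacking the bands is what extends the usable working range.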
Plenoptic cameras capture a sampled version of the map of rays emitted by a 3D scene, commonly known as the lightfield. These devices have been proposed for multiple applications, such as calculating different sets of views of the 3D scene, removing occlusions, and changing the focused plane of the scene. They can also capture images that can be projected onto an integral-imaging monitor to display 3D images with full parallax. In this contribution, we report a new algorithm that transforms the plenoptic image so as to choose which part of the 3D scene is reconstructed in front of and behind the microlenses in the 3D display process.
In this paper, a novel measurement method for determining the optimum viewing distance (OVD) of a multi-view 3D display is proposed. Using this method, the OVD can be determined efficiently by analyzing ray-tracing results for at least one view-point image in a few local areas of the display, and the position error of each view-point image formed from the entire 3D display area can also be calculated as a function of the z-direction.
Dr. Fumio Okano, a well-known pioneer and innovator of three-dimensional (3D) displays, passed away on 26 November 2013 in Kanagawa, Japan, at the age of 61. Okano joined Japan Broadcasting Corporation (NHK) in Tokyo in 1978. In 1981, he began researching high-definition television (HDTV) cameras, HDTV systems, ultrahigh-definition television systems, and 3D televisions at NHK Science and Technology Research Laboratories. His publications have been frequently cited by other researchers. Okano served eight years as chair of the annual SPIE conference on Three-Dimensional Imaging, Visualization, and Display and another four years as co-chair. Okano's leadership in this field will be greatly missed and he will be remembered for his enduring contributions and innovations in the field of 3D displays. This paper is a summary of the career of Fumio Okano, as well as a tribute to that career and its lasting legacy.
This article proposes a 3D reconstruction method using multiple depth cameras. Since a depth camera acquires depth information from a single viewpoint, one camera alone is insufficient for 3D reconstruction; we therefore use multiple depth cameras, acquiring depth information from different viewpoints to reconstruct the 3D scene. With multiple depth cameras, however, it is difficult to acquire accurate depth information because the cameras interfere with one another. To solve this problem, we propose a time-division multiplexing method in which depth information is acquired from the different cameras sequentially. After acquiring the depth images, we extract features using the Fast Point Feature Histogram (FPFH) descriptor and then perform 3D registration with Sample Consensus Initial Alignment (SAC-IA). We reconstructed 3D human bodies with our system and measured body sizes to evaluate the accuracy of the 3D reconstruction.
Non-interferometric, intensity-based methods of phase retrieval such as transport of intensity (TI) employ a simple experimental technique for amplitude and phase reconstruction of a static object by capturing several diffraction patterns at different observation planes. The purpose of this work is to extend this technique, numerically and experimentally, to moving phase and amplitude objects. The simulations solve the TI equation (TIE) using the fast Fourier transform (FFT) method, and the amplitude and calculated phase in the detection plane are numerically back-propagated to the object plane using the paraxial transfer function. Furthermore, we illustrate how a static 3D phase and/or amplitude object can be reconstructed tomographically by illuminating it at multiple angles: the object is mounted on a rotating stage, and multiple diffraction patterns are captured at different angles and observation planes. The reconstructed optical fields are tomographically recomposed to yield the final 3D shape using a simple multiplicative technique, and this tomographic approach can be generalized to 3D moving objects. Finally, we use the TIE to determine the phase induced in a liquid by heating with a focused laser beam, which causes self-phase modulation of the beam.
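For a beam of roughly uniform intensity the TIE reduces to a Poisson equation, which the FFT method inverts in two transforms. Below is a minimal solver for that uniform-intensity special case (NumPy; a simplification, not the paper's general solver, and the unrecoverable mean phase is set to zero), checked on a single-frequency phase where the inversion is exact.

```python
import numpy as np

def tie_phase(dIdz, I0, k, pitch):
    """FFT solution of the TIE for a uniform-intensity beam, using the
    convention  k * dI/dz = -I0 * laplacian(phi).  The mean (DC) phase is
    unrecoverable from intensity data and is set to zero."""
    ny, nx = dIdz.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pitch)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=pitch)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX ** 2 + KY ** 2
    k2[0, 0] = 1.0                        # avoid division by zero at DC
    rhs = np.fft.fft2((k / I0) * dIdz)
    rhs[0, 0] = 0.0                       # drop the unrecoverable DC term
    return np.real(np.fft.ifft2(rhs / k2))

# Check on a single-frequency phase (hypothetical optical parameters).
pitch, lam = 5e-6, 633e-9
k = 2.0 * np.pi / lam
N = 128
x = np.arange(N) * pitch
q = 2.0 * np.pi * 4 / (N * pitch)         # exactly 4 cycles across the field
phi = 0.3 * np.cos(q * x)[None, :] * np.ones((N, 1))
dIdz = (1.0 / k) * q ** 2 * phi           # from k*dI/dz = -I0*lap(phi), I0 = 1
phi_rec = tie_phase(dIdz, I0=1.0, k=k, pitch=pitch)
```

In practice dI/dz comes from a finite difference of two or more measured intensity planes, and the non-uniform-intensity case adds an extra division and gradient step around the same FFT core.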
A holographic display capable of showing floating holographic images is introduced; the display is intended for user interaction with the image. It consists of two parts: a multiplexed holographic image generator and a spherical mirror. The time-multiplexed image formed from 2 × 10 DMD frames, displayed on a PDLC screen, is imaged by the spherical mirror and becomes a floating image. This image is combined spatially with two layered TV images appearing behind it. Since the floating holographic image has a real spatial position and depth, it allows a user to interact with the image.
This paper describes electronic holography output of three-dimensional (3D) video captured with integral photography. A real-time 3D image reconstruction system was implemented by using a 4K (3840 × 2160) resolution IP camera to capture 3D images and converting them to 8K (7680 × 4320) resolution holograms. Multiple graphics processing units (GPUs) were used to create the 8K holograms from the 4K IP images. In addition, still higher-resolution holograms were created to successfully reconstruct live-scene video with a diagonal size of 6 cm on a large electronic holography display.
A 3D display system using a phase-only distribution is presented. In particular, a binary phase distribution is used to reconstruct a 3D object with a wide viewing-zone angle. The phase distribution to be displayed on a phase-mode spatial light modulator can be obtained either experimentally or numerically; in this paper, we present a numerical process based on computer-graphics data. A random phase distribution is attached to all polygons of the input 3D object so that the object is reconstructed well from the binary phase distribution. Numerical and experimental results are presented to show the effectiveness of the proposed system.
In this paper, we review a high-resolution three-dimensional (3D) holographic display that uses 2D images captured with an integral imaging system and a dense ray-resampling technique. Holograms are generated from rays resampled from the 2D images. This method improves the display resolution because each object is captured in focus, and the light-ray information is interpolated and resampled with high density on a ray-sampling plane located near the object. Numerical experiments on different scenes show that the presented technique can reconstruct multiple objects at different depths with higher resolution than conventional integral-imaging-based holographic displays.
Depth perception arising from the motion parallax produced by a horizontally moving pickup device was examined. Image sequences of a real scene were captured using a horizontally moving pickup device either with or without a fixation point. The results show that depth perception performance was relatively high when the horizontally moving pickup device was set up with a fixation point. To examine this result, we investigated the displacement and differential displacement of the pickup device and the motion perception elicited by the visual stimuli, i.e., the captured image sequences.
An eye-tracked head-mounted display (ET-HMD) system is able to display virtual images as a classical head-mounted display (HMD) does, while additionally tracking the gaze direction of the user. An HMD with fully integrated eye-tracking capability offers multi-fold benefits, not only to fundamental scientific research but also to emerging applications of such technology. A key limitation of the state-of-the-art ET-HMD technology is the lack of compactness and portability. In this paper, we present an innovative design of a high-resolution optical see-through ET-HMD system based on freeform optical technology. A prototype system is demonstrated, which offers a goggle-like compact form factor, a non-obstructive see-through field of view, true high-definition image resolution for the virtual display, and better than 0.5 arc minute of angular resolution for the see-through view. We will demonstrate the application of the technology as an assistive and augmentative communication (AAC) device.
3D objects with depth information can provide many benefits to users in education, surgery, and interaction, and many studies have sought to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict, as is generally accepted in 3D research. To avoid the vergence-accommodation mismatch and provide a strong sense of presence, we use a prism-array-based display to present 3D objects. Emotional pictures were used as visual stimuli in the control panels to increase the information transfer rate and reduce false positives in controlling 3D objects: selective attention motivated involuntarily by the affective mechanism can enhance the steady-state visually evoked potential (SSVEP) amplitude and thereby increase interaction efficiency, since more attentional resources are allocated to affective pictures with high valence and arousal levels than to conventional visual stimuli such as black-and-white oscillating squares and checkerboards. Among the representative BCI control signals (event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it achieves high information transfer rates, users can learn to control the BCI system within a few minutes, and only a few electrodes are required to obtain brainwave signals reliable enough to capture the user's intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are highly susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
We have devised a full-resolution stereoscopic television system incorporating both a patterned retarder and active glasses. Selective viewing of the left image by the left eye and the right image by the right eye is achieved by the conventional combination of a patterned retarder and left- and right-polarized filters. Full resolution is provided by the active components of the glasses, which act as a switchable beam displacer. Pairs of line-interleaved images are displayed on the LCD screen sequentially at a frame rate of 120 Hz, and through the active glasses the viewer sees full-resolution stereoscopic images as if they were displayed in an interlaced manner. The active glasses are flicker-free, and the patterned retarder keeps the crosstalk level below 1%.
The moiré effect is an optical phenomenon that degrades image quality; as such, it should be avoided or minimized in displays, especially in autostereoscopic three-dimensional ones. The structure of multiview autostereoscopic displays typically includes two parallel layers with an integer ratio between the cell sizes. In order to minimize the moiré effect at finite distances, we developed a theory and a computer simulation tool that models the behavior of the visible moiré waves over a range of parameters (the displacement of an observer, the distance to the screen, and the like). Previously, we simulated sinusoidal waves only; however, this was not enough to cover all real-life situations. Recently, the theory was improved and non-sinusoidal gratings are now included as well; the simulation tool has been updated accordingly. In simulation, the parameters of the resulting moiré waves are measured semi-automatically. The advanced theory, accompanied by the renewed simulation tool, makes the minimization both reliable and convenient. The tool runs in two modes, overview and detailed, and can be controlled interactively. Computer simulation and physical experiment confirm the theory; the typical normalized RMS deviation is 3 - 5%.
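The classical relation behind the simulated moiré waves can be sketched as follows; the pitches `p1`, `p2` and the square-wave (non-sinusoidal) profile are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

# Illustrative sketch (not the authors' tool): superposing two line gratings
# with close pitches p1, p2 produces a moiré wave of period p1*p2/|p1 - p2|.
p1, p2 = 10.0, 11.0            # grating pitches in pixels (assumed)
x = np.arange(4096)

# Non-sinusoidal (square-wave) transmittance gratings.
g1 = (np.sin(2 * np.pi * x / p1) > 0).astype(float)
g2 = (np.sin(2 * np.pi * x / p2) > 0).astype(float)

# Multiplicative superposition: the product carries a slow moiré envelope.
product = g1 * g2

# Predicted moiré period from the classical two-grating formula.
moire_period = p1 * p2 / abs(p1 - p2)   # 110 pixels for these pitches
```

In a multiview display, `p1` and `p2` would correspond to the projected cell sizes of the two parallel layers as seen from the observer's position.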
The digital holographic microscope is an ideal tool for quantitative phase-contrast imaging of living cells. It yields the thickness distribution of the object under investigation from a single hologram, and from a series of holograms the dynamics of the cell can be obtained. Two-beam digital holographic microscopes, however, have low temporal stability due to uncorrelated phase changes occurring in the reference and object arms. One way to overcome this is to use common-path techniques, in which the reference beam is derived from the object beam itself. Both beams travel along the same path, increasing the temporal stability of the setup. In self-referencing techniques, a portion of the object beam is converted into the reference beam. This can be achieved, for example, by using a glass plate to create two laterally sheared versions of the object beam at the sensor, which interfere to produce the holograms/interferograms. This creates a common-path setup, leading to high temporal stability (~0.6 nm). The technique can be used to map cell-membrane fluctuations with high temporal stability. Here we provide an overview of our work on the development of temporally stable quantitative phase-contrast techniques for dynamic imaging of micro-objects and biological specimens, including red blood cells.
To expand the usable stereoscopic viewing zone in the depth direction and remove the crosstalk induced by the structure of the existing slanted lenticular lens sheet, a Segmented Lenticular lens with Varying Optical Power (SL-VOP) is proposed.
A simulator which can define the super-multiview condition is introduced. This simulator can direct two different images to each eye of a viewer. The images are not only different view images but also images of different image cells at different positions in the viewing zone of a contact-type multiview imaging system with a parallel configuration. This simulator will reveal many of the conditions and requirements for multiview images to qualify as super-multiview. The presence of continuous parallax, the possibility of monocular depth sensing, the required number of different view images, the allowed pattern of mixing different view images, and the optimum disparity between images are several examples of the conditions and requirements to be defined by the simulator.
Present generations of 3D displays, including stereoscopic and autostereoscopic displays, have a very limited number of views, and thus limited parallax. In contrast, the emerging light field (LF) displays support hundreds of views with acceptable spatial resolution, thereby enabling a more realistic representation of 3D scenes. This comes at the price of high data throughput, complex data acquisition, and a high demand for computational power. Thus, optimization of the content representation is of crucial importance for the performance of the whole display system. In this paper, we discuss the requirements for LF-based processing of 3D content for representation on the new generation of ultra-realistic LF displays. We analyze the overall processing chain from acquisition through LF-based modeling and representation to visualization on the considered displays. By analyzing the visualization capabilities of a given LF display using spatial- and frequency-domain analysis, we draw guidelines on how to properly acquire the required data and repurpose it based on the targeted display. We show that by taking into account the properties of the display during scene sensing and during LF processing, a good visual representation of 3D content on a given display can be achieved with a minimalistic capture setup.
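To make the data-throughput argument concrete, here is a back-of-envelope estimate with assumed numbers (not figures from the paper):

```python
# Illustrative estimate of the raw data rate of a light-field display,
# showing why content-representation optimization is crucial.
# All parameter values below are assumptions, not from the paper.
views = 200                 # number of views ("hundreds")
width, height = 1920, 1080  # spatial resolution per view
bytes_per_pixel = 3         # 8-bit RGB
fps = 60                    # refresh rate

bytes_per_frame = views * width * height * bytes_per_pixel
gbits_per_second = bytes_per_frame * fps * 8 / 1e9
# About 597 Gbit/s uncompressed, far beyond practical link bandwidths,
# hence the need for display-aware capture and LF processing.
```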
This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shape of both simple and more complex objects from multiple 2D images (infrared and digital images for indoor scenes, digital images only for outdoor scenes) and then add the reconstructed object to the simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method uses different camera settings and explores different properties in the reconstruction of the scenes, including light, color, texture, shape, and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from the visible surfaces. To recover the 3D shapes and the depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented in Matlab. The technique presented here also lets us simulate short videos by reconstructing a sequence of multiple scenes of the video separated by small intervals of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used; its low-bandwidth perception-based features include edges and motion.
In integral holography, the reconstructed 3D image quality is affected by positional errors of the lenses in the micro-lens array. We analyzed the spatial distortion effects in the reconstructed 3D integral Fourier holographic image that are caused by misarrangements of elemental lenses in the micro-lens array. An intermediate projection-view generation method is then used to eliminate these spatial distortion effects in reconstruction. This method provides a way to compensate for lens-array manufacturing errors in realistic integral holographic imaging.
In this paper, we propose a Bayesian framework to infer the depths of object surfaces in a 3D integral imaging system. In such a system, the depth of Lambertian surfaces can be estimated from the statistics of the spectral radiation pattern; however, the estimated depth may contain errors due to system uncertainties. To better infer the depth information, we utilize a Bayesian framework and a Markov Random Field (MRF) model, using the statistical information of object intensities and the assumption that object surfaces are smooth. In the proposed method, we combine the Bayesian framework with the characteristics of 3D integral imaging systems to infer the depths. Simulated and experimental results illustrate the performance of the proposed method.
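A minimal sketch of MAP depth refinement under an MRF smoothness prior, using iterated conditional modes on a 1D depth profile; the label set, noise level, and smoothness weight `lam` are our illustrative assumptions, not the authors' model:

```python
import numpy as np

# Illustrative MRF depth smoothing (not the authors' exact model): noisy
# per-pixel depth estimates d_obs are refined by minimizing a data term
# plus a pairwise smoothness term with iterated conditional modes (ICM).
rng = np.random.default_rng(1)
true_depth = np.concatenate([np.full(20, 100.0), np.full(20, 150.0)])
d_obs = true_depth + rng.normal(0.0, 5.0, true_depth.size)

candidates = np.arange(80.0, 181.0, 1.0)   # discrete depth labels (assumed)
lam = 0.5                                   # smoothness weight (assumed)

d = d_obs.copy()
for _ in range(10):                         # ICM sweeps until near-convergence
    for i in range(d.size):
        data = (candidates - d_obs[i]) ** 2          # fidelity to observation
        smooth = np.zeros_like(candidates)
        if i > 0:
            smooth += (candidates - d[i - 1]) ** 2   # left-neighbor prior
        if i < d.size - 1:
            smooth += (candidates - d[i + 1]) ** 2   # right-neighbor prior
        d[i] = candidates[np.argmin(data + lam * smooth)]

# The MAP-smoothed profile should be closer to the true depths than raw.
err_raw = np.mean((d_obs - true_depth) ** 2)
err_map = np.mean((d - true_depth) ** 2)
```

The same data-plus-smoothness trade-off extends to a 2D grid MRF over the reconstructed depth map, at the cost of four neighbors per pixel instead of two.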
We propose a three-dimensional (3D) object recognition approach via computational integral imaging and the scale-invariant feature transform (SIFT) that is invariant to changes in illumination, scale, rotation, and affine distortion. Usually, 3D object recognition requires matching features extracted from the reference object against those in computationally reconstructed images. However, this process requires first rendering all of the depth images in turn, which hurts recognition efficiency. Considering that integral imaging provides a set of elemental images with different viewpoints, we first recognize the object in 2D using five elemental images and then choose the elemental image with the most matching points among the five. This selected image contains the most information related to the reference object. Finally, we use this selected elemental image and its neighboring elemental images, which should also contain much information about the reference object, to calculate the disparity with the SIFT algorithm. Consequently, the depth of the 3D object can be obtained from stereo-camera theory, and the recognized 3D object can be reconstructed by computational integral imaging. This method fully exploits the different information provided by the elemental images and the robust SIFT feature-extraction algorithm to recognize 3D objects.
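The stereo-camera depth relation used in the final step can be sketched as follows; the focal length, baseline, and disparity values below are hypothetical, not from the paper:

```python
# Stereo triangulation sketch: once SIFT matching yields a disparity d
# between two neighboring elemental images of a parallel pair, the depth
# follows from Z = f * B / d. All numeric values are illustrative.
f = 800.0         # focal length in pixels (assumed)
B = 2.0           # baseline between elemental lenses in mm (assumed)
disparity = 16.0  # disparity of a matched SIFT keypoint in pixels (example)

Z = f * B / disparity   # depth in mm
```

Averaging `Z` over many matched keypoints would give a more robust depth estimate for the recognized object.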
We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has its own projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projection directions. In this paper, we extend the algorithm to record multiple 2-D projection patterns in color. There are two common ways of color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB primaries, while subtractive color mixing, used to mix inks, is based on CMY primaries. We devised two coloring methods, one based on additive mixing and one on subtractive mixing. We performed numerical simulations of the coloring methods and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. As a result of the implementation, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art, and so forth.
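The two mixing rules can be sketched as follows; this is our illustration of standard additive and subtractive mixing, not the authors' exact coloring methods:

```python
# Additive mixing (emissive voxels such as LEDs): light adds per channel.
def mix_additive(c1, c2):
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

# Subtractive mixing (inks): combine absorptions in CMY space
# (CMY = 255 - RGB), then convert back to RGB.
def mix_subtractive(c1, c2):
    return tuple(255 - min((255 - a) + (255 - b), 255) for a, b in zip(c1, c2))

red, green = (255, 0, 0), (0, 255, 0)
assert mix_additive(red, green) == (255, 255, 0)      # light: red + green = yellow

cyan, magenta = (0, 255, 255), (255, 0, 255)
assert mix_subtractive(cyan, magenta) == (0, 0, 255)  # ink: cyan + magenta = blue
```

The additive rule matches the LED cube (emitted light sums), while the subtractive rule matches the thread display, where projected colors act more like overlaid filters.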
We present an extended-viewing-angle holographic three-dimensional display system using an optical fiber array backlight and a pupil-tracking technique. One of the limitations in implementing a wide-viewing-angle holographic three-dimensional display is the restrictive space-bandwidth product of the spatial light modulator. In our proposed method, the optical fiber array backlight combined with the pupil-tracking system enables the static viewing angle of the holographic reconstruction to be enlarged using only one spatial light modulator.
An SLM with very fine pixel pitch is needed for a holographic display system. Among the various kinds of SLMs, commercially available high-resolution LCoS has been widely used as a spatial light modulator. However, the size of commercially available LCoS SLMs is limited because LCoS manufacturing is based on a semiconductor process developed on small Si wafers. Recently, very high-resolution flat-panel displays (~500 ppi) were developed as "retina displays". Until now, the pixel pitch of flat-panel displays has been several times larger than that of LCoS. But considering the possibility of shrinking the pixel pitch with advanced lithographic tools, flat-panel display technology should make it possible to build an SLM with a high space-bandwidth product. We simulated a high-resolution TFT-LCD panel on a glass substrate using oxide-semiconductor TFTs with a pixel pitch of 20 μm, and we considered the phase-modulation behavior of an ECB LC mode. The TFT-LCD panel is a reflective type with a 4-metal structure and organic planarization layers. The technical challenges of a high-resolution, large-area SLM with very fine pixels will be discussed.
In this paper we use the digital projection moiré (DPM) method to analyze the non-linear behavior of sandwich beams with a compliant foam core. These cores are highly flexible with respect to the face sheets, and their behavior is associated with localized effects in the form of localized displacements and stresses, which in turn influence the overall behavior of the sandwich beams. In this study we compare three-point bending results with Finite Element Analysis (FEA) results obtained from the ABAQUS finite element code, and we show that the DPM experimental results are in good agreement with the FEA simulations. The presented method can serve as a simple, advantageous, and user-friendly whole-field testing technique for many applications in the evaluation of composite materials and sandwich structures.
This paper provides an overview of a color photon-counting integral imaging system using Bayer elemental images for 3D visualization of photon-limited scenes. A color image sensor with a Bayer color filter array, i.e., a red, a green, or a blue filter in a repeating pattern, captures the elemental image set of a photon-limited three-dimensional (3D) scene. It is assumed that the observed photon count in each channel (red, green, or blue) follows Poisson statistics. The 3D scene is reconstructed in the Bayer format by applying a computational geometrical ray back-propagation algorithm and a parametric maximum-likelihood estimator to the photon-limited Bayer elemental images. Finally, several standard demosaicing algorithms are applied to convert the 3D reconstruction from the Bayer format into an RGB-per-pixel format. Experimental results demonstrate that the gradient-corrected linear interpolation technique performs best, with acceptable PSNR and lower computational complexity.
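The Poisson photon-count model and its maximum-likelihood estimate can be sketched as follows; the irradiance, photon budget, and number of overlapping samples are assumed values for illustration:

```python
import numpy as np

# Illustrative sketch of photon-limited sensing in one Bayer channel:
# the count at a pixel is Poisson with mean proportional to the scene
# irradiance, and the ML estimate over K overlapping elemental-image
# samples of the same scene point is the sample mean of the counts.
rng = np.random.default_rng(2)
irradiance = 0.8   # normalized scene irradiance at one point (assumed)
n_photons = 50     # expected photons per elemental image (photon-starved)
K = 100            # elemental images overlapping this point (assumed)

counts = rng.poisson(irradiance * n_photons, size=K)
ml_estimate = counts.mean() / n_photons   # Poisson MLE of the irradiance
```

Because the Poisson MLE averages over all overlapping elemental images, the reconstruction quality improves as more elemental images see the same scene point.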
We present a novel method to analyze facial expressions from 3D shape using radial curves and curve-based geometric features. The curve-based representation of 3D facial shape and the corresponding geometric features overcome the curse of dimensionality, providing a means for fast, automatic classification and comparison of 3D facial shapes. Our proposed curve-based geometric features effectively capture local variations and classify facial expressions from 3D facial shapes. A multiclass feature-selection technique is used to identify the most effective features, which localize the most expressive regions of the face. Six basic facial expressions are classified using a publicly available 3D facial expression dataset.
Holography is one method to record the information of a real scene, but it requires coherent illumination, and the limited resolution of the pick-up device may strongly restrict the size of the object recorded. It is also possible to generate a hologram of a real scene under incoherent illumination by using techniques like integral imaging or multiple imaging, but the spatial resolution provided by these methods is usually quite poor. A hologram can also be computed from a virtual scene, but the heavy computational load limits the size of the scene, and it is difficult to create precise models of complicated objects. In this paper, we analyze the different techniques used to pick up 3D data from a real object, such as holography and integral imaging. We then present the first results of a simulator developed to evaluate the key parameters of the hologram data according to the pick-up system. These preliminary results make it possible to evaluate the performance of each method and to choose the optimal one according to the resolution, the depth of field, or the angle of view.
In this paper, we propose a new method for occluded-object visualization using multiple Kinect sensors. The quality of an occluded object reconstructed by the conventional method from elemental images captured by common camera arrays in integral imaging is usually degraded by the presence of an occluding object in the 3D space, which is a common situation in reality. Although several occlusion-removal algorithms have been proposed to improve the resolution of the reconstructed occluded object, all of them are very time-consuming and make the 3D object reconstruction process inefficient. The Kinect sensors, by contrast, provide depth images in addition to RGB images. Since the depth and RGB color images are captured by two different cameras at different locations on the Kinect sensor, the depth image must be mapped to the color image. After this mapping (or registration), the same pixel location in the depth and RGB color images represents the same point in 3D space. As a result, the mapped depth image can be used to remove the occluding object from the corresponding RGB image (or elemental image) before performing object reconstruction in integral imaging. Consequently, the occluded object can be visualized with high quality and without the long runtimes of other methods.
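A minimal sketch of the depth-guided occlusion-masking step, assuming the depth map has already been registered to the color image; the image size and depth thresholds below are illustrative, not the authors' values:

```python
import numpy as np

# Illustrative depth-guided occlusion removal: after depth-to-color
# registration, pixels whose depth differs from the target object's
# depth are masked out of the RGB elemental image before reconstruction.
h, w = 4, 4
rgb = np.full((h, w, 3), 200, dtype=np.uint8)     # registered color image
depth = np.full((h, w), 2000, dtype=np.uint16)    # target object at 2 m (mm)
depth[1:3, 1:3] = 500                             # occluder at 0.5 m

object_depth_mm = 2000   # depth of the object of interest (assumed known)
tolerance_mm = 300       # accepted depth band around the object (assumed)
mask = np.abs(depth.astype(int) - object_depth_mm) <= tolerance_mm

cleaned = rgb.copy()
cleaned[~mask] = 0       # zero out occluding pixels before reconstruction
```

Running this per elemental image is a single thresholding pass, which is why it avoids the heavy computation of image-based occlusion-removal algorithms.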