Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131001 (2020) https://doi.org/10.1117/12.2564447
This PDF file contains the front matter associated with SPIE Proceedings Volume 11310, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131002 (2020) https://doi.org/10.1117/12.2546941
We propose a design of a retinal-scanning-based near-eye display for augmented reality. Our solution is highlighted by a laser scanning projector, a diffractive optical element, and a moist-eye model with gradient refractive indices. The working principles related to each component are comprehensively studied. Its key performance metrics are summarized as follows: the field of view is 122°, the angular resolution is 8.09′, the diffraction efficiency is 57.6%, the transmittance is 80.6%, the uniformity is 0.91, the luminance is 323 cd/m², the modulation transfer function is above 0.99999 at 3.71 cycles/degree, the contrast ratio is 4878, and the distortion is less than 24%.
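As an illustrative back-of-envelope consistency check (not from the paper), the reported field of view and angular resolution together imply the number of resolvable points across the display:

```python
fov_deg = 122.0        # horizontal field of view reported in the abstract
ang_res_arcmin = 8.09  # angular resolution reported in the abstract

# resolvable points across the FOV, assuming resolution = FOV / point count
n_points = fov_deg * 60.0 / ang_res_arcmin
print(round(n_points))  # about 905 resolvable points
```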
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131003 (2020) https://doi.org/10.1117/12.2542365
We demonstrate an optical chromatic aberration correction method for virtual reality (VR) displays using cost-efficient flat optics. The fabricated ultra-broadband liquid crystal thin-film polymer lens is based on the Pancharatnam-Berry phase and exhibits over 97% first-order diffraction efficiency across the display spectrum. By cascading the fabricated polymer lens with the conventional Fresnel VR lens, the lateral color breakup in the near-eye display system can be reduced by more than 10 times. Both optical designs and experimental results are presented and discussed.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131004 (2020) https://doi.org/10.1117/12.2545972
We present a design of a contact lens display for augmented reality, which features an array of collimated light-emitting diodes and a contact lens. By building the infrastructure directly on top of the eye, the eye is allowed to move or rotate freely without the need for exit pupil expansion or eye tracking. The resolution of the light-emitting diodes is foveated to match the density of cones on the retina. In this manner, the total number of pixels as well as the latency of image processing can be significantly reduced.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131006 (2020) https://doi.org/10.1117/12.2547647
Holography has demonstrated the potential to achieve a wide field of view, focus-supporting, optical see-through augmented reality display in an eyeglasses form factor. Although phase-modulating spatial light modulators are becoming available, phase-only hologram generation algorithms are still imprecise, resulting in severe artifacts in the reconstructed imagery. Since the holographic phase retrieval problem is non-linear, non-convex, and computationally expensive, with non-unique solutions, existing methods make several assumptions to keep the phase-only hologram computation tractable. In this work, we deviate from any such approximations and solve the holographic phase retrieval problem as a quadratic problem using complex Wirtinger gradients and standard first-order optimization methods. Our approach results in high-quality phase hologram generation with at least an order of magnitude improvement over existing state-of-the-art approaches.
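Stripped to its essentials, the approach described can be sketched as plain gradient descent on the amplitude-matching loss L(φ) = ½‖|F(e^{iφ})| − a‖², with the gradient obtained via Wirtinger calculus. The sketch below is an illustrative reconstruction under stated assumptions (a single FFT as the propagation model, a fixed step size), not the authors' implementation:

```python
import numpy as np

def phase_retrieval(target_amp, iters=300, lr=0.2, seed=0):
    """Gradient descent on L(phi) = 0.5 * || |FFT(exp(i*phi))| - target_amp ||^2."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(-np.pi, np.pi, target_amp.shape)  # random initial SLM phase
    for _ in range(iters):
        u = np.exp(1j * phi)
        F = np.fft.fft2(u, norm="ortho")             # unitary FFT: adjoint = inverse
        amp = np.abs(F)
        r = (amp - target_amp) * F / (amp + 1e-12)   # Wirtinger gradient w.r.t. conj(F)
        g = np.fft.ifft2(r, norm="ortho")            # back-propagate to the SLM plane
        phi -= lr * np.imag(np.conj(u) * g)          # chain rule through u = exp(i*phi)
    return phi

# toy usage: a 32x32 target amplitude that is known to be achievable
rng = np.random.default_rng(1)
target = np.abs(np.fft.fft2(np.exp(1j * rng.uniform(-np.pi, np.pi, (32, 32))),
                            norm="ortho"))
phi = phase_retrieval(target)
err = np.abs(np.abs(np.fft.fft2(np.exp(1j * phi), norm="ortho")) - target)
```

Classical alternating-projection methods (e.g. Gerchberg-Saxton) instead bounce amplitude constraints between the two planes; the abstract's point is that direct first-order optimization of the quadratic loss avoids the approximations those methods introduce.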
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131007 (2020) https://doi.org/10.1117/12.2546445
We introduce a portable projection mapping device (PPMD) for extracting and projecting implant geometric data in single-stage cranioplasty surgery. Our system consists of a real-time tracking device, which tracks reference markers fixed on the skull, and a digitizing probe for localizing the contour of the cranial defect in order to modify the oversized customized cranial implant (CCI). Feature points on the surface of the skull are registered to the reconstructed surface model of the skull obtained from CT images. The CT model is then projected onto the surface of the skull. The digitizer is used to outline the geometry of the skull resection with sub-millimeter tracking accuracy. The proposed PPMD and its associated pipeline enable, for the first time, intraoperative cranial implant modification with image feedback to surgeons. Our proposed PPMD system, an inexpensive device, can also be used for other medical augmented reality applications, especially when direct projection mapping is preferred over the use of wearable head-mounted displays.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131008 (2020) https://doi.org/10.1117/12.2548336
This paper presents some results of a larger study of vision geometry, carried out between 2014 and 2016 at the Lisbon School of Architecture, which considers eye-tracking technology as an effective way of studying human vision. The methodology relied on portable eye-tracking equipment to analyze 3D immersive visual perception. Thirty observers conducted 120 analyses of four different three-dimensional architectural spaces. The 120 samples allowed the analysis of 60,000 video frames to understand the different kinds of elements that can be used to describe and study visual information. Quantitative and qualitative results are presented in the form of graphics to better understand eye movements in the perception of three-dimensional visual spaces, with a particular focus on macro-saccadic movements. The purpose of this work is to examine how the technology works, the possibilities it offers, and its limitations. The paper presents results in a form considered reliable to work with at present, in order to obtain insights of scientific value that can point toward future resolutions of current methodological and technological issues.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131009 (2020) https://doi.org/10.1117/12.2543257
We demonstrate that transverse chromatic aberration (TCA) measurements can be used to determine the eyebox of a virtual reality head-mounted display using a digital test pattern, which consists of red, green, and blue bars placed horizontally and vertically at ±5°, ±10°, ±15° in the field of view. The pattern also features a white cross in the center of the field of view to determine the horizontal and vertical resolution. This provides simultaneous measurements of the TCAs and resolution for a given position of the camera in the eyebox. A map of the eyebox was generated by raster-scanning the position of the camera over the eyebox. The results for the Oculus Rift show that resolution has approximately a quadratic dependence on the position in the eyebox whereas the response to TCAs is linear. Therefore, TCA provides a more sensitive eyebox measurement at small displacements allowing for repeatable centering within 0.5 mm.
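The sensitivity argument can be illustrated numerically (synthetic coefficients, purely illustrative, not the paper's data): near the eyebox center, the derivative of a quadratic metric vanishes while a linear metric keeps a constant slope, so TCA responds more strongly to small displacements:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 61)   # camera displacement in the eyebox (mm), synthetic
res = 1.0 + 0.05 * x**2          # resolution metric: roughly quadratic in displacement
tca = 0.8 * x                    # TCA metric: roughly linear in displacement

dres = np.gradient(res, x)       # local sensitivity of each metric
dtca = np.gradient(tca, x)
i0 = np.argmin(np.abs(x))        # sample closest to the eyebox center
print(abs(dres[i0]), abs(dtca[i0]))  # quadratic slope ~0 at center; linear slope stays ~0.8
```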
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100A (2020) https://doi.org/10.1117/12.2543416
Previous analyses of 3D display systems rarely assess the perceived retinal image quality and the ability to render depth and stimulate a proper accommodative response. Moreover, few works provide analyses of 3D displays for users with eye refractive errors. To address these problems, in this paper we present a general framework in which a schematic eye model is implemented as an integral part of 3D display systems to characterize the perceived retinal image and visual responses. Using this framework, we demonstrate retinal image results for natural viewing and for a stereoscope, for both normal vision and a 1-diopter myopia condition.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100B (2020) https://doi.org/10.1117/12.2544979
Augmented and Mixed Reality promise another leap forward in productivity and lifestyle, offering benefits with a magnitude and impact matching those of the introduction of smartphones. However, to enable this, many significant technical challenges must be overcome. Here we review the state of the art, identifying key challenges to consumer-wearable devices established in the literature. In particular, we discuss: the vergence-accommodation conflict (the detrimental effect of overlays that are optically inconsistent with the real-world objects they augment), the need to present overlays visible against the vast dynamic range that the human eye can process, and constraints surrounding the scalability and cost of manufacture of optics. We demonstrate that digital holography as a display mechanism not only provides an effective solution to the aforementioned challenges, but also makes various hardware requirements far less stringent. By operating in the Fourier domain, holographic displays are freed from design compromises driven by the constraints of a pixelated screen. However, the computational cost of CGH has previously been considered prohibitive. We demonstrate that for real-world applications the latest advancements made by VividQ deliver sufficient focal accuracy at a computational cost within reach of personal mobile devices. We prove that it is now possible to clear the barriers preventing mass adoption of Augmented and Mixed Reality products with Computer-Generated Holography.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100D (2020) https://doi.org/10.1117/12.2546483
Augmented reality (AR) continues to be heavily studied as a research topic for potential medical use. The goal of seeing the patient's anatomy below the surface of the human body has always been thought of as the ideal surgical navigation tool. Rather than observing medical imaging, such as computed tomography (CT) or magnetic resonance (MR) images, on a monitor, hospital personnel would be able to see patient-specific pathologies through AR glasses. Neurosurgery has commonly been a field of choice for AR integration because of the many needs that can potentially be met. Understanding AR in the neurosurgical Operating Room (OR) poses some benefits as well as concerns in terms of human-computer interaction (HCI). One of the core concepts of HCI is the idea of user-centered design. While one aims to create an intuitive interface for the user group, introducing AR into the OR can increase cognitive overload and inattentional blindness if executed improperly, without considering the full use case and all stakeholders. A common application of neuro-navigation is in spinal surgery, which, while incredibly accurate, disrupts OR workflow. These devices drastically improve patient outcomes yet are seldom employed because of these disruptions. HCI concepts can better integrate AR into the OR to solve pitfalls observed in modern neuro-navigation, and give designers, engineers, and surgeons the necessary tools to develop AR solutions. Our goal is to thoroughly analyze the OR workflow such that AR can be effectively incorporated.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100E (2020) https://doi.org/10.1117/12.2546605
Augmented reality (AR) has potential for increasing safety in marine navigation, but current-generation AR devices have a significantly limited field of view, which could make them impractical for such usage. We developed a virtual reality ship simulator that displays AR navigation overlays across variable fields of view. This paper presents a human factors study in which participants (including experienced boaters) piloted a virtual vessel while navigating using combinations of a traditional electronic chart display and AR overlays presented at various fields of view. Eye movements were tracked to discover how AR and restricted fields of view affect navigational target finding, safety, and situational awareness. The results indicate that AR provides significant benefits that can promote safer marine navigation, and that the field of view of an AR device has some significant and predictable effects on its usefulness for navigational tasks. Increased field-of-view capabilities in future AR hardware are expected to improve AR's usefulness for marine navigational tasks; however, this research shows that current-generation AR hardware (such as HoloLens 2) may already be suitable for this application, as most of the significant benefits were gained by providing AR overlays regardless of their field of view.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100F (2020) https://doi.org/10.1117/12.2544737
An eye-box expansion method that merges a waveguide with a holographic optical element (HOE) is presented. Using a waveguide with a refractive index of 1.7, a wide field of view (FoV) of up to 60° is achieved. Full color and a wide FoV are obtained using two waveguides. A projection optical system based on the Scheimpflug principle is proposed and designed to compensate for large-scale off-axis HOE aberrations. To enhance image quality, the projection system is precisely simulated, and the grating pitch and alignment are calculated to increase the eye-box and uniformity.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100I (2020) https://doi.org/10.1117/12.2551872
Reference designs for wide-field-of-view color waveguide displays for low-cost consumer AR, using high-index-modulation photopolymer and liquid crystal materials to provide compact wide-angle displays, are presented.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100J (2020) https://doi.org/10.1117/12.2547733
SA Photonics and SL Process have developed a new type of electronic see-through augmented reality head-mounted display. Our display utilizes a very compact freeform prism eyepiece that has almost no peripheral obscuration. This provides the wearer with the advantages of an electronic see-through system (occlusion, operation in high-ambient-light environments, high-contrast imagery, image enhancement, image export) in a system that also allows natural vision of the real world outside the field of view of the eyepiece. As such, the user sees a seamless view of the entire real world, with augmentation in the central 62-degree diagonal field of view. The cameras for the see-through imagery are "hidden" behind the eyepieces, so they do not block the view of the real world, producing less than a 10% obscuration of the total visual field, much less than any other electronic see-through system and even less than many optical see-through systems. In addition to providing the see-through imagery, these cameras and SL Process's innovative video processing architecture provide six-degrees-of-freedom head tracking as well as hand tracking.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100K (2020) https://doi.org/10.1117/12.2547354
Augmented reality (AR) offers many opportunities to make our lives easier. One example is helping us navigate hands-free and eyes-out in novel or unprepared environments. Cell-phone-based Global Positioning System (GPS) applications have relieved us of some of the more burdensome tasks of map reading and route learning, but over-reliance on this technology has been shown to provide an impoverished understanding of the environment. What happens, then, when GPS is not available or its signals are compromised, as when one is indoors, in deep urban canyons, or in heavy forest, or when GPS is purposely jammed? This review examines the psychological mechanisms of human wayfinding and navigation, towards an understanding of how we achieve survey knowledge of an environment. We also look at examples of animals that navigate long distances using the earth's magnetic field, celestial cues, and polarized light from the sun, moon, and stars, as well as odometry, optical flow, and landmark recognition. From this bio-inspired perspective, we discuss requirements for a hybrid, multimodal navigation and tracking system for a targeted set of users who operate in high-stress environments. What we find is that a combination of both allocentric (external) and egocentric (internal) navigation cues is required; and because allocentric cues may be intermittent or unreliable, good path integration is required.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100M (2020) https://doi.org/10.1117/12.2543051
A compact, lightweight AR headset architecture is presented, employing a semi-transparent Fresnel mirror combiner. Compared to a grating combiner, this design approach minimizes diffractive color aberrations by working in a high-order blazed grating regime. Two different technologies, grayscale lithography and anisotropic wet-chemical etching of silicon wafers, were evaluated for manufacturing the Fresnel element. A first monocular demonstrator was assembled and analyzed analytically, numerically, and experimentally. A crucial point in the manufacturing of the Fresnel combiner element was the high degree of surface shape accuracy of the working Fresnel edges, as well as of the embedding polymer layer, needed to ensure optical imaging quality. Currently, only the wet-chemical etching technology could provide sufficiently good results. Two main causes of aberrations could be traced: grating effects of the Fresnel structure and asymmetric refraction at the embedding layer. As a result, different kinds of color aberration are generated. Additionally, the asymmetric refraction causes different types of distortion. Two approaches for compensating color aberrations could be derived: optical pre-compensation of the lateral color caused by the embedding layer, and optimization of the Fresnel structure to minimize grating effects.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100N (2020) https://doi.org/10.1117/12.2543264
Waveguides, or lightguides, have the advantages of light weight, small volume, and relatively high efficiency, and are thus considered one of the promising solutions for the optical combiner required by optical see-through augmented reality displays. Although a few commercial products utilize lightguides as optical combiners, few works have discussed the potential image artifacts caused by ray bundle collimation errors in lightguides. In this paper, we discuss three causes of collimation errors in geometrical waveguides: lightguide manufacturing tolerance, change of image focal depth, and a curved lightguide substrate. Based on simulation and experimental results for the first two causes, we demonstrate two different types of image artifacts in geometrical-waveguide-based AR displays.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100P (2020) https://doi.org/10.1117/12.2544479
The Screen Door Effect (SDE) is a common complaint amongst users of Virtual Reality (VR) headsets. SDE is characterized by a mesh-like artifact in an image, much like looking through a screen door, caused by the space between pixels having a large enough angular size to be resolved through the viewing optics. Current-generation VR displays exhibit SDE, distracting the user and breaking immersion. Some commercial VR headsets reduce SDE using dispersive elements placed in front of the display that effectively blur the displayed image. However, these solutions only reduce the SDE, and the blur is functionally equivalent to reducing the quality of the optical stack. An alternative method of image enhancement is to "shake" the display using piezo actuators, effectively shifting the pixels to enhance the image. This method can increase the apparent size of the pixels, reducing the gaps between pixels and leading to a reduction of SDE. In this paper, we explore the limits of SDE reduction using this pixel-shifting method with a square pixel architecture on a 9.4 μm pixel pitch RGBW display, and how it generalizes to other display architectures. We examine how different stimuli change with screen door reduction, analyzing solid color, text, and natural scenes. We also explore what parameters minimize SDE, adding to the other benefits of this solution and generalizing the parameters to all displays.
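Whether the inter-pixel gaps are resolvable depends on the angular pitch through the viewing optics. A rough check (the eyepiece focal length below is an assumed, typical VR magnitude, not from the paper):

```python
import math

pitch_um = 9.4     # pixel pitch from the abstract
focal_mm = 40.0    # assumed eyepiece focal length (illustrative, not from the paper)

# angular pitch of one pixel as seen through the optics, in arcminutes
ang_rad = (pitch_um * 1e-6) / (focal_mm * 1e-3)
ang_arcmin = math.degrees(ang_rad) * 60.0
print(f"{ang_arcmin:.2f} arcmin")  # ~0.81 arcmin: gaps sit near the ~1 arcmin acuity limit
```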
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100S (2020) https://doi.org/10.1117/12.2546613
Ensuring the quality and accuracy of AR, VR, and MR displays requires precise measurement and qualification. Because these head-mounted displays (HMDs) fill a significant portion of the eye's field of view (FOV) (typically 10-50 degrees for AR and 100-120 degrees for VR), minor defects and irregularities can be observed by a device user, potentially interfering with functionality and user experience. The equipment available on the market for testing AR/VR/MR displays may be similar; however, optical differences between AR/MR and VR systems require unique test-equipment setups to ensure that measurement and qualification replicate the human visual experience. This paper provides a brief overview of display metrology requirements common to all types of AR/MR/VR displays and looks particularly at the unique requirements of testing AR and MR displays. For example, AR devices typically have a narrower FOV than immersive VR display headsets, and AR/MR virtual images are often (with optical see-through devices) projected onto a transparent surface with the background environment still visible to the user, making brightness and the resulting contrast critical performance parameters of the display. Topics covered in this paper include: unique optical configurations, performance parameters, and usage environments specific to AR/MR devices; how to address the challenges of AR/MR display testing; and specifications of photometric/colorimetric imaging systems (such as imaging resolution) used to measure and qualify AR/VR/MR displays. The paper also includes a discussion of recent research into testing new types of AR optical architectures.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100T (2020) https://doi.org/10.1117/12.2542699
Eye-tracking hardware and software are being rapidly integrated into mixed reality (MR) technology. Cognitive science and human-computer interaction (HCI) research demonstrate several ways eye-tracking can be used to gauge user characteristics, intent, and status, as well as provide active and passive input control to MR interfaces. In this paper, we argue that eye-tracking can be used to ground MR technology in the cognitive capacities and intentions of users and that such human-centered MR is important for MR designers and engineers to consider. We detail relevant and timely research in eye-tracking and MR and offer suggestions and recommendations to accelerate the development of eye-tracking-enabled human-centered MR, with a focus on recent research findings. We identify several promises that eye-tracking holds for improving MR experiences. In the near term, these include user authentication, gross interface interactions, monitoring visual attention across real and virtual scene elements, and adaptive graphical rendering enabled by relatively coarse eye-tracking metrics. In the far term, hardware and software advances will enable gaze-depth-aware foveated MR displays and attentive MR user interfaces that track user intent and status using fine and dynamic aspects of gaze. Challenges, such as current technological limitations, difficulties in translating lab-based eye-tracking metrics to MR, and heterogeneous MR use cases, are considered alongside cutting-edge research working to address them. With a focused research effort grounded in an understanding of the promises and challenges for eye-tracking, human-centered MR can be realized to improve the efficacy and user experience of MR.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100U (2020) https://doi.org/10.1117/12.2543346
In display projection applications, micron scale light emitting diode (μLED) array displays present an alternative to conventional lamps as light sources, with the advantage of higher efficiency and smaller form factor (particularly if the array is monolithic). However, μLEDs without inbuilt optics produce an emission pattern similar to a Lambertian distribution. By employing a pair of microlens arrays aligned to the μLED array to collimate the light, and a relay lens to focus the light onto a display panel, we can produce a display that both serves the same function as a traditional illuminator, such as one found in an LCOS or DMD display, and utilises the addressing of the array to generate light over the desired display area only (zoning), rather than having the entire array illuminated whenever the display is active. Using raytracing, we find that such a system is limited by the performance of the small μLEDs needed and by the manufacture of the associated micro-optics.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100V (2020) https://doi.org/10.1117/12.2544452
We present a holographic method for an augmented reality near-eye display system based on the projection of a digital amplitude-only computer-generated hologram (AO-CGH) on a DMD. The operating principle is based upon Fresnel holography and spatial light modulator encodings. The resultant system can display a high-resolution 3D image with accurate depth cues. Furthermore, we fully utilize the diffraction bandwidth of the hologram by eliminating the DC and conjugation noise, providing a simple solution to a long-standing problem in holographic displays. As a result, our system is low cost and compact, making it a promising candidate for consumer AR applications.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100W (2020) https://doi.org/10.1117/12.2545147
A single Digital Micromirror Device with a single illumination source projects multiple, independent patterns into corresponding directions across a nearly doubled angular extent by time multiplexing and by nanosecond illumination pulse synchronization for a binary patterned programmable blazed grating. The resulting “Angular Spatial Light Modulator” (ASLM) system nearly doubles the étendue of a DMD-type SLM and creates a multiplication factor for the output pixel count and effective pixel density. We demonstrate an extended FOV display, a light-field projector, and a multi-view display which can be implemented into AR/VR systems. We present an implementation update using the DLP7000 DMD, increasing output pixel count and effective pixel density by orders of magnitude beyond traditional SLM systems while achieving an extended field-of-view and/or eye-box size due to the increased étendue.
Yang Zhao, Nathan Matsuda, Xuan Wang, Marina Zannoli, Douglas Lanman
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100X (2020) https://doi.org/10.1117/12.2546687
The sensitivity of the human visual system to brightness and contrast is orders of magnitude higher than what current near-eye displays can emulate. Although high brightness and contrast, which in combination are defined as high dynamic range (HDR) in this work, are critical to the realism provided by near-eye displays, these parameters are often compromised to achieve other major performance targets, such as resolution, field of view, form factor, and power consumption. Previously, various efforts from industry and academia resulted in HDR direct-view displays, and sometimes HDR near-eye display test beds. This work explores different technical approaches to integrated display systems with high brightness and contrast. HDR near-eye display test beds and form-factor-optimized prototypes were built based on dual modulation, where HDR displays were created by optically conjugating two low dynamic range light intensity modulators, in conjunction with high contrast viewing optics. The viewing optics are custom designed to avoid Fresnel lenses and use only refractive surfaces without any slope discontinuity to minimize contrast reduction from stray light. The dual modulation display system followed optical architectures similar to previous work by combining projectors and LCD display panels. However, the display system was designed and optimized for miniaturization by using folded optics to meet a head-mounted form factor. The challenges in system architecture, display technologies, form factor reduction, optical design, as well as brightness and contrast metrologies are discussed in detail.
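The dual-modulation architecture described above cascades two low-dynamic-range intensity modulators optically, so their contrasts multiply. A minimal numerical sketch of that principle (the 1000:1 contrast figures and the simple leakage-floor model are illustrative assumptions, not values from the paper):

```python
def dual_modulation(backlight, panel, bl_contrast=1e3, panel_contrast=1e3):
    """Cascade two low-dynamic-range intensity modulators optically.

    Each modulator maps a target transmission in [0, 1] onto
    [1/contrast, 1] -- it cannot go fully dark, only down to its
    leakage floor.  Because the cascade multiplies transmissions,
    the effective contrast is roughly bl_contrast * panel_contrast.
    """
    def transmit(t, contrast):
        floor = 1.0 / contrast
        t = min(max(t, 0.0), 1.0)
        return floor + (1.0 - floor) * t

    return transmit(backlight, bl_contrast) * transmit(panel, panel_contrast)

white = dual_modulation(1.0, 1.0)   # both modulators fully open
black = dual_modulation(0.0, 0.0)   # both at their leakage floor
print(white / black)                # ~1e6: two 1000:1 modulators cascade to ~1e6:1
```

The same multiplication is why stray light in the viewing optics matters so much: any unmodulated light bypasses both stages and caps the achievable black level.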
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113100Z (2020) https://doi.org/10.1117/12.2561859
We report on the Quantum Photonic Imager (QPI) device, which comprises a spatial array of digitally addressable multicolor micro-scale pixels, wherein each pixel is a vertical stack of Red, Green, and Blue light emitting diodes (LEDs) whose RGB emission shares the same optical aperture. Starting with Blue, Green and Red epi-wafers first processed (pixelated) to create micro-LED arrays of any desired size with 5-10 μm pixel pitch, each of the three processed epi-wafers is sequentially bonded to a single receiving handle wafer, followed by substrate removal and backside processing, to create a handle wafer with Blue, Green and Red micro-LED pixel arrays stacked vertically on top of each other. The handle wafer, which encapsulates the stacked RGB pixel array, is also monolithically pre-processed to incorporate a micro-scale, pixel-level optical element array designed to collimate and directionally modulate the multi-color light emitted from the individual pixels of the LED array. A proprietary CMOS image processor is then bonded to the handle wafer, combining the vertically stacked arrays of micro-scale pixel-level optics and LEDs. The QPI device alleviates the inefficiencies associated with spatially or temporally multiplexed color pixel architectures, enabling high pixel density and thus small form-factor display system design, and it enables low-power display system operation. Small form-factor multi-color “wearable” AR displays with sub-1W power consumption, with the QPI optically coupled to the edge of the AR combining lens, have been demonstrated. Additionally, QPI-enabled compact light field displays, head-up displays, and pico projectors have also been demonstrated.
Fabrication Processes, Materials, and Design Tools for AR
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131010 (2020) https://doi.org/10.1117/12.2543692
Wafer-level nanoimprint lithography (NIL) has increasingly become a key enabling technology for new devices and applications across a wide range of markets. Leading manufacturers of augmented reality (AR) devices, optical sensors, and biomedical chips are already utilizing NIL and realizing its benefits, including the ability to mass manufacture micro- and nano-scale structures with a maximum degree of freedom in device dimensions. Another key advantage of this replication-based technology is that even complex structures requiring precise and time-consuming fabrication methods can be transferred to mass manufacturing in an efficient semiconductor manufacturing line. Additionally, for many devices, especially optical applications, the replicated layer can be used directly as a functional layer in the product. Today NIL is considered a decisive process step for a number of emerging products, including AR waveguides. With increasing volumes, scaling of the production lines is crucial for the most economical implementation of NIL. In particular, for scaling to production lines using 200 mm or even 300 mm wafer sizes, the whole process chain has to be established. This is a particular focus for AR devices, which require highly complex structures with tight specifications; best efforts in master fabrication are therefore crucial to obtain the best-performing devices. For smaller substrates, full-area masters are typically manufactured and used for the NIL process. However, as masters are mainly fabricated by sequential processes, their cost scales with the pattern area. For 200 mm and 300 mm wafers, it has proven to be a viable option to start with single high-quality devices, scale them by step-and-repeat (SR) NIL to fully populated wafer-scale masters, and subsequently use those for volume manufacturing on wafer level.
Wafer-level production itself then requires reliable replication of working stamps and wafer-level nanoimprinting of these multiple devices on a single wafer. As a result, a thorough understanding of all required patterning and replication steps is key to enabling these large-area, high-volume manufacturing lines.
James A. Monroe, Jeremy S. McAllister, David S. Content, Jay Zgarba
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131013 (2020) https://doi.org/10.1117/12.2546869
Temperature changes can detrimentally affect an optic’s performance through changes in radius of curvature, thickness, and index of refraction. The problem is particularly tricky when these changes combine to shorten the focus with increasing temperature, as they do for common infrared lens materials. Current solutions include active focusing mechanisms and nested tubes of different materials that cancel each other’s thermal expansion, but these solutions increase size and mass. ALLVAR alloys are the only metals that shrink when heated and expand when cooled, a property known as negative thermal expansion (NTE), making them a unique solution to this thermal focus shift problem. They can exhibit NTE down to -30×10⁻⁶ °C⁻¹. This unique property opens the opto-mechanical design window for athermalized optics with decreased size and weight. This paper discusses the potential for ALLVAR alloys to reduce the size of passively athermalized infrared optics.
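The passive-athermalization idea in the abstract reduces to a one-line balance: the housing must change length exactly as much as the lens's focus drifts with temperature. A hedged sketch of that balance for the simplest case of a single lens in a barrel whose length equals the focal length (the sign convention and the 30 ppm/°C defocus figure are illustrative assumptions, not values from the paper):

```python
def focus_shift(f_mm, defocus_coeff, d_temp):
    """Thermal focus shift of a simple lens, df = -f * coeff * dT.

    defocus_coeff > 0 models a lens whose focal length shortens on
    heating -- the problem case the abstract describes for IR optics.
    """
    return -f_mm * defocus_coeff * d_temp

def required_barrel_cte(defocus_coeff):
    """Passive athermalization for a barrel whose length equals the
    focal length: the barrel length change must track the focus,
    L * alpha * dT == df, hence alpha == -defocus_coeff.  A positive
    defocus coefficient therefore demands a *negative*-CTE barrel,
    which is where an NTE alloy comes in."""
    return -defocus_coeff

# A lens defocusing at 30 ppm/degC is cancelled by a barrel at the
# quoted ALLVAR CTE of -30e-6 per degC:
print(required_barrel_cte(30e-6))          # -3e-05
print(focus_shift(100.0, 30e-6, 10.0))     # -0.03 mm over a 10 degC rise
```

Real designs mix an NTE segment with a conventional metal to hit intermediate effective CTEs, but the balance condition is the same.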
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131016 (2020) https://doi.org/10.1117/12.2545032
This article presents a new educational process built on a virtual-reality-based application. At present, in accordance with established archaeological research practice, archaeologists must transfer found samples to long-term storage. In this context, creating a virtual museum with deep, experiential user interaction is a challenging problem. Modern virtual reality approaches were implemented using technologies such as Unreal Engine (UE) and Leap Motion (LM). In the paper, we give the scheme of the implemented development workflow and demonstrate interaction with objects through the interface and hand gestures using LM within UE.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131017 (2020) https://doi.org/10.1117/12.2545154
Colour coding is known to have a significant impact on navigation tasks in the real world by showing where to go or by highlighting regions in a building where specific activities are performed. Colour coding is frequently used in large buildings, such as hospitals, to help patients navigate and locate clinics. However, it is still not well known how colour coding works in virtual reality systems, or how it influences tasks such as navigation and situation awareness. In this experiment, we explore the impact of colour coding on a navigation task by comparing participants’ performances in a virtual world. Five different mazes of similar complexity are assigned to the various schemes. In the first experiment, participants are asked to find the exit in a virtual maze without any assistance; in the second and third experiments, participants are provided with a two-dimensional (2D) map, with and without a global positioning system (GPS), as a guide to find the exit; in the fourth and fifth experiments, participants are provided with a 2D map with the selected colours embedded along the routes, with and without a GPS to indicate direction. The experimental results will provide evidence on how the colour-coding scheme influences user performance in a virtual world navigation task. This may have a strong influence on the future design of virtual reality training systems.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101C (2020) https://doi.org/10.1117/12.2554571
Although Augmented Reality (AR) is not a recent topic, it has rapidly become one of the most prominent research subjects since exceedingly capable hand-held devices became readily available. These devices can process copious amounts of data in the blink of an eye, making it feasible to overlay computer-generated, interactive graphics on real-world images in real time and thereby deepen the user's immersive experience. In this paper, we present a novel mobile application that allows users to explore and interact with a virtual library in their physical space using marker-less AR. Digital versions of books are represented by 3D book objects on bookcases, as in an actual library. Using an in-app gaze controller, the user’s gaze is tracked and mapped into the virtual library, allowing the user to select (via gaze) the digital version of any book and download it for perusal. To complement the immersive user experience, continuity is maintained using the concept of Portals for any transition from AR to immersive VR or vice versa, corresponding to transitioning from a "physical" to a virtual space. Portals make these transitions simple and seamless for the user. The presented application was implemented using the Google ARCore SDK and Unity 3D, and will serve as a handy tool for spawning a virtual library anytime and anywhere, giving the user an immersive sense of being in an actual traditional library while having the digital version of any book on the go.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101E (2020) https://doi.org/10.1117/12.2543588
We propose a holographic augmented reality (AR) display using a cylindrical holographic optical element (HOE). The cylindrical HOE is used as a wavelength-selective reflective screen. A three-dimensional (3D) object is not reconstructed on the cylindrical HOE directly, but behind it as a virtual image. Thus, it becomes possible to superimpose the 3D reconstructed image on a real 3D object without physical contact: augmented reality can be achieved. Moreover, in our method, the shape of the HOE is not planar but cylindrical. Thanks to this shape, the viewing zone of our method becomes much wider than that of a planar hologram, whose viewing zone is fundamentally limited to the forward region. In our method, information about the 3D object to be reconstructed is coded into a conventional planar hologram such as a spatial light modulator (SLM). Then, the wavefront modulated by the planar hologram enters the cylindrical HOE. Because the cylindrical HOE reflects the incident wavefront with a very wide diverging angle, viewers can observe the 3D object from a very wide horizontal range. For this purpose, our method utilizes the reflection on the cylindrical surface. Thus, the holograms modulating the wavefronts must be designed to properly account for the reflection on the cylindrical HOE. In our method, wavefront propagation via the cylindrical HOE is calculated based on geometric optics. To verify our method, we performed optical experiments, and our holographic AR display was successfully demonstrated.
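The geometric-optics propagation mentioned above hinges on reflecting rays off a cylindrical surface. A toy sketch of that single step, tracing one ray in the plane perpendicular to the cylinder axis (the frame and geometry are illustrative assumptions, not the paper's actual wavefront calculation):

```python
import math

def reflect_off_cylinder(ox, oz, dx, dz, radius):
    """Trace a 2-D ray from inside a cylindrical mirror of the given
    radius (axis through the origin, perpendicular to this plane) and
    return the hit point and reflected direction.

    (ox, oz): ray origin; (dx, dz): ray direction (normalized below).
    """
    norm = math.hypot(dx, dz)
    dx, dz = dx / norm, dz / norm
    # Solve |o + t*d|^2 = r^2 for the forward intersection (t > 0).
    b = 2.0 * (ox * dx + oz * dz)
    c = ox * ox + oz * oz - radius * radius
    t = (-b + math.sqrt(b * b - 4.0 * c)) / 2.0
    hx, hz = ox + t * dx, oz + t * dz
    # The cylinder's surface normal points radially.
    nx, nz = hx / radius, hz / radius
    dot = dx * nx + dz * nz
    rx, rz = dx - 2.0 * dot * nx, dz - 2.0 * dot * nz
    return (hx, hz), (rx, rz)

# A ray launched from the axis hits the wall and comes straight back:
print(reflect_off_cylinder(0.0, 0.0, 1.0, 0.0, 2.0))
```

In the full display, each SLM pixel's wavefront is decomposed into many such rays, and the reflected fan is what produces the wide horizontal viewing zone.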
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101F (2020) https://doi.org/10.1117/12.2542593
Eye tracking is one of the essential technologies for near-eye display systems, providing natural and involuntary human-machine interfaces for AR, VR, and MR applications. In this paper, a novel type of eye-tracking device is proposed, consisting of a VCSEL array and a PSD that together realize a triangulation system; it operates without video imaging or mechanical scanning. The PSD detects the spatial position of a laser beam spot emitted from a given light source and reflected from the corneal surface, which relates to microscopic eye orientation. Meanwhile, the spatially aligned light emitters, i.e., the VCSEL array, expand the detection angle of eye movement: the slight change of light-incidence angle on the cornea due to the differing emitter positions transforms the corresponding angle into spatial information folded onto the light-receiving surface of the PSD. These operations correspond to detecting coarse and fine eye movements, respectively. A notable feature of this device is that even faint light from each emitter can identify the position of the laser beam spot with the help of lock-in detection. The operating principle has been evaluated numerically and experimentally, and a prototype eye-tracking module small enough to be mounted on glasses frames has been demonstrated. These results suggest the possibility of an eye tracker with battery-powered operation and ultimately low processing load and data-transfer requirements, which are our final goals for this device.
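The PSD in the abstract converts the reflected laser spot into a position signal from just two photocurrents. The standard textbook relation for a one-dimensional PSD, x = (L/2)·(I₂ − I₁)/(I₁ + I₂), sketched here purely for illustration (this is the generic detector formula, not code from the paper):

```python
def psd_position(i1, i2, active_length):
    """Spot position on a 1-D position-sensitive detector (PSD) from
    its two electrode photocurrents:

        x = (L/2) * (i2 - i1) / (i1 + i2)

    x = 0 at the detector center, +L/2 at the electrode collecting i2.
    The ratio makes the reading insensitive to total light level,
    which is what lets faint, lock-in-detected light suffice.
    """
    total = i1 + i2
    if total <= 0.0:
        raise ValueError("no photocurrent: spot is off the detector")
    return 0.5 * active_length * (i2 - i1) / total

print(psd_position(1.0, 1.0, 4.0))   # 0.0  (equal currents: centered spot)
print(psd_position(0.0, 1.0, 4.0))   # 2.0  (all current at the i2 edge)
```

Triangulating gaze from this reading additionally needs the source-detector geometry and the corneal curvature, which the paper's coarse/fine scheme encodes via the emitter positions.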
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101G (2020) https://doi.org/10.1117/12.2546369
Currently, much attention in virtual reality simulations and laboratories is focused on developing user interaction controllers for better immersion and student experience. The visual experience is the basis of any student's laboratory study, but direct contact with the visualized environment is also important. This paper presents a virtual scalpel technique that simulates medical students' training experience using virtual reality (VR) and augmented reality (AR) laboratory software. The user's hands and a pencil with a tracking marker are the main tracking-system components. Owing to its high sensitivity for hand-gesture recognition, the Leap Motion camera was chosen as the base device for tracking the interaction of the user's virtual scalpel and hand models. We used Unreal Engine to create and visualize the virtual laboratory. Upon recognition of the “hand-pencil” system, the 3D scalpel model is activated in our VR scene, with a collision region in the proposed virtual blade area. The user manipulates this “hand-scalpel” system in the VR and AR simulation, where the blade's collision area interacts with a 3D object imitating organic tissue. In this paper we present the sensitivity and efficiency of the “hand-pencil” system inside the virtual scene. In addition, we compare and discuss the application of the methodology for VR and AR prototypes of the laboratory scene.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101H (2020) https://doi.org/10.1117/12.2553259
Spatial computing enables overlay of the digital world on the real world in a spatially interactive manner by merging digital light-fields, perception systems, and computing. The digital content presented by spatial computing needs to work in tandem with real-world surroundings and, more importantly, with the human eye-brain system, which is the ultimate judge of system success. As a result, to develop a spatial computing system, it is essential to have a proxy for the human eye-brain with which to calibrate and verify the system's performance. This paper proposes a novel camera design for this purpose which mimics human ocular anatomy and physiology in the following aspects: geometry, optical performance, and ocular motor control. Specifically, the proposed camera not only adopts the corneal and pupil geometry of the human eye, but its iris and pupil can also be configured with multiple texture, color, and diameter options. Furthermore, the resolution of the eyeball camera is designed to match the acuity of typical 20/20 human vision, and focus can be dynamically adjusted from 0 to 3 diopters. Lastly, a pair of eyeball cameras is mounted independently on two hexapods to simulate eye gaze and vergence. With the help of the eyeball cameras, both the perceived virtual and real worlds can be calibrated and evaluated under deterministic and quantifiable eye conditions such as pupil location and gaze. In principle, the proposed eyeball camera serves as a bridge that combines all the data from spatial computing, such as eye tracking, the 3D geometry of the digital world, display color accuracy/uniformity, and display optical quality (sharpness, contrast, etc.), into a holistic view, helping to blend the virtual and real worlds together seamlessly.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101K (2020) https://doi.org/10.1117/12.2566399
Computational Eyeglasses and Near-Eye Displays with Focus Cues
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101L (2020) https://doi.org/10.1117/12.2566412
Multi Focal Near Eye AR Display Architecture to Solve the Vergence-Accommodation Problem
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101M (2020) https://doi.org/10.1117/12.2566408
Laser Safety Considerations in Laser-Related Head Mounted Displays
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101N (2020) https://doi.org/10.1117/12.2566397
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101O (2020) https://doi.org/10.1117/12.2566411
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101P (2020) https://doi.org/10.1117/12.2566398
AttentivU: A Wearable Pair of EEG and EOG Glasses for Real-Time Physiological Processing
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101Q (2020) https://doi.org/10.1117/12.2563705
Unlocking the Key Technical Drivers Needed to Advance AR Consumer Wearables
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101R (2020) https://doi.org/10.1117/12.2566418
Panel: What is the Potential Market for the AR, VR Industry?
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101T https://doi.org/10.1117/12.2566415
Smartglasses Versus Mixed Reality: Hardware, Use Cases, and Convergence
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101V https://doi.org/10.1117/12.2566396
Are Consumers Ready for Smart AR Glasses Mass Adoption?
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113101W https://doi.org/10.1117/12.2566409
Latency Compensation for Optical See-Through AR Headsets
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131020 https://doi.org/10.1117/12.2566393
Using Augmented Reality Glasses in Multi-User Shared Experiences
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131022 https://doi.org/10.1117/12.2566395
Achieving Eye-Clean Visual Fidelity: How Eye Tracking and Digital Lens Correction Enable a Breakthrough in VR/AR HMD Picture Clarity
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131023 https://doi.org/10.1117/12.2566381
Display technologies will soon be limited by pixel density and power constraints. Closely matching display technologies to human vision through active optics and software algorithms allows field of view to be decoupled from resolution. Steered foveated displays will enable a significant increase in effective resolution while simultaneously decreasing pixel count.
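The pixel savings behind this decoupling can be sketched with simple arithmetic: render only a small, steerable inset at full acuity and the rest of the field at much lower density. The field-of-view and pixels-per-degree figures below are illustrative assumptions for a generic foveated design, not numbers from the paper.

```python
# Rough pixel-budget comparison: uniform-resolution display versus a
# steered foveated display. All parameters are illustrative assumptions.

def pixels(fov_deg: float, ppd: float) -> int:
    """Pixel count for a square field of view at a given pixels-per-degree."""
    side = fov_deg * ppd
    return int(side * side)

# Uniform display: full 100-degree field rendered at foveal acuity (~60 ppd).
uniform = pixels(100, 60)

# Foveated display: a steered 20-degree high-resolution inset at 60 ppd,
# plus a 100-degree low-resolution background at 15 ppd.
foveated = pixels(20, 60) + pixels(100, 15)

print(f"uniform:  {uniform:,} pixels")
print(f"foveated: {foveated:,} pixels")
print(f"savings:  {uniform / foveated:.1f}x fewer pixels")
```

Under these assumptions the foveated budget is roughly a tenth of the uniform one, which is the sense in which effective resolution rises while pixel count falls.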
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131024 https://doi.org/10.1117/12.2566406
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131025 https://doi.org/10.1117/12.2566597
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131027 https://doi.org/10.1117/12.2566419
Panel: How Do We Build the AR, VR World with Hardware?
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 1131029 https://doi.org/10.1117/12.2566404
High-Index Glass Substrates for Augmented Reality Displays
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102A https://doi.org/10.1117/12.2566383
Guiding and Harnessing Light: High Index Waveguides and Optical Materials Enabling AR Devices
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102C https://doi.org/10.1117/12.2566405
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102D https://doi.org/10.1117/12.2566384
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102F https://doi.org/10.1117/12.2563728
Diffractive and Reflective Waveguides: A Game of Trade-Offs
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102G https://doi.org/10.1117/12.2566420
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102H https://doi.org/10.1117/12.2566391
The success of consumer AR smartglasses relies on a set of elements falling into place and working well together. The first piece of the puzzle is content, which largely determines the appeal of any new consumer electronics product. Two other pieces, display technology and system power efficiency, determine how the content will be presented and for how long it will be available. The final piece is the industrial design, which largely decides in what environments and circumstances the new product will be used. With these considerations in mind, Bosch recently introduced the LightDrive display technology, designed to support present and future smartglasses content and system architectures. The LightDrive offering comprises a full Retinal Projection system enabling the design of smartglasses that look and feel like regular eyewear.
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102J https://doi.org/10.1117/12.2566389
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102K https://doi.org/10.1117/12.2563726
Electro-Optic Photopolymer Waveguide Technology for Compact Wide Field of View MR Glasses
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102L https://doi.org/10.1117/12.2566413
Perspectives on Microdisplay Technology in AR Waveguide Optical Solutions
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102N https://doi.org/10.1117/12.2563706
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102P https://doi.org/10.1117/12.2566600
Design Principles for Building VR: Translating Optics and Sensor Technologies into Products Consumers Love
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102S https://doi.org/10.1117/12.2566386
Lumiode: Ultra High Brightness Micro-LED Displays for AR/MR
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102T https://doi.org/10.1117/12.2566410
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102U https://doi.org/10.1117/12.2566407
JDC Paving the Way for MicroLED Microdisplays to Get to Market
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102V https://doi.org/10.1117/12.2566392
Proceedings Volume Optical Architectures for Displays and Sensing in Augmented, Virtual, and Mixed Reality (AR, VR, MR), 113102W https://doi.org/10.1117/12.2566421
Panel: Micro-LEDs: The Hot, New, Vital Building Blocks of AR, VR