An optical see-through (OST) head-mounted display (HMD) enables the user to see the real world and virtual objects together. Previous OST HMD-based augmented reality applications have mainly attempted to render virtual objects that are geometrically and photometrically well aligned with the real world. In this paper, we show that the user's perception of the real world can be changed by overlaying an image of the real world onto the real world itself. As a case study, we apply image refocusing to the captured real-world image and display the refocused image via the OST HMD so that the user sees the real world and the refocused image together. To this end, we first perform an experiment to find the relationship between the synthetic blur of the augmented object and the perceptual blur of the real-world object. We then apply depth-adaptive defocus blur to the image of the real world to create a perceptual blur of the real world. Experimental results show that the proposed method can produce special visual effects on the real world that the user sees through the OST HMD.
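The abstract does not specify how the depth-adaptive blur is computed, so the following is only a minimal sketch of one common approach: precompute a small stack of uniformly blurred images and, per pixel, select the layer whose blur strength matches that pixel's distance from a chosen focal depth. The function name, the linear blur-versus-depth mapping, and all parameter values are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_adaptive_blur(image, depth, focal_depth, max_sigma=6.0, n_layers=8):
    """image: HxWx3 float array; depth: HxW array in the same units as focal_depth.

    Assumed mapping (not from the paper): blur sigma grows linearly with the
    normalized distance from the focal plane, up to max_sigma.
    """
    dist = np.abs(depth - focal_depth)
    sigma_map = max_sigma * dist / (dist.max() + 1e-8)

    # Precompute a stack of uniformly blurred copies of the image.
    sigmas = np.linspace(0.0, max_sigma, n_layers)
    layers = [image if s == 0 else
              np.stack([gaussian_filter(image[..., c], s) for c in range(3)], axis=-1)
              for s in sigmas]

    # Per pixel, pick the layer closest to the desired blur strength.
    idx = np.clip(np.rint(sigma_map / max_sigma * (n_layers - 1)).astype(int),
                  0, n_layers - 1)
    out = np.empty_like(image)
    for i, layer in enumerate(layers):
        mask = idx == i
        out[mask] = layer[mask]
    return out
```

The layered approximation trades per-pixel kernel accuracy for speed; a per-pixel spatially varying kernel would be more faithful but far slower.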
A time-of-flight (ToF) depth camera can capture a depth map of the scene by measuring the phase delay between the emitted and reflected infrared (IR) light signals. In addition, an intensity map representing the magnitude of the reflected light can be obtained by the ToF camera. If we regard the light source of the ToF camera as a flash, the intensity map can be treated as an IR-flashed image. Building on ideas from flash/no-flash photography and dark-flash photography, we devise a color image enhancement framework that exploits information from the intensity and depth maps. To this end, ToF-related distortions of the intensity and depth maps are first reduced. We then restore fine details of color images captured under weak illumination by combining mutually beneficial information from the visible- and IR-band signals. In addition, we show that the depth map can be used to produce depth-adaptive effects, such as depth-adaptive smoothing, in the resultant color image.
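Two pieces of this pipeline can be made concrete. The phase-to-depth relation for a continuous-wave ToF camera is the standard d = c·φ / (4π·f_mod). For fusing the IR-flash intensity map with the dim color image, a common building block (one plausible reading of the abstract, not necessarily the paper's exact method) is a cross/joint bilateral filter in which the clean IR intensity map drives the range kernel while the noisy color image supplies the values to be smoothed. The function names, window radius, and kernel widths below are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_depth(phase, f_mod):
    """Continuous-wave ToF relation: d = c * phi / (4 * pi * f_mod).
    phase in radians, f_mod in Hz, result in meters."""
    return C * phase / (4 * np.pi * f_mod)

def joint_bilateral(color, guide, radius=5, sigma_s=3.0, sigma_r=0.1):
    """color: HxWx3 noisy low-light image in [0,1]; guide: HxW IR intensity map in [0,1]."""
    H, W = guide.shape
    pad = radius
    cp = np.pad(color, ((pad, pad), (pad, pad), (0, 0)), mode='reflect')
    gp = np.pad(guide, pad, mode='reflect')
    out = np.zeros_like(color, dtype=float)
    weight = np.zeros((H, W))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Spatial kernel: nearby pixels count more.
            spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            # Range kernel computed on the IR guide, so edges preserved in the
            # flash image survive in the denoised color result.
            shifted_g = gp[pad + dy:pad + dy + H, pad + dx:pad + dx + W]
            rng = np.exp(-((shifted_g - guide) ** 2) / (2 * sigma_r ** 2))
            w = spatial * rng
            shifted_c = cp[pad + dy:pad + dy + H, pad + dx:pad + dx + W, :]
            out += w[..., None] * shifted_c
            weight += w
    return out / weight[..., None]
```

Guiding the range kernel with the IR intensity map rather than the color image itself is the key flash/no-flash idea: the flash image has a high signal-to-noise ratio, so its edges are more trustworthy than those of the dim color frame.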
In this paper, we present a simple but effective hole filling algorithm for depth images acquired by time-of-flight (ToF) cameras. The proposed algorithm recovers a hole region of a depth image by taking into account the contour pixels surrounding the hole region. In particular, eight contour pixels are selected and then grouped into four pairs according to the four representative directions, i.e., horizontal, vertical, and two diagonal directions. The four pairs of contour pixels are then combined via a bilateral filtering framework in which the filter coefficients are obtained by considering the photometric distance between the two depth pixels in each pair and the geometric distance between the hole pixel and the contour pixels. The experimental results demonstrate that the proposed algorithm effectively recovers depth edges disconnected by the hole region.
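A minimal sketch of the directional scheme described above: for each hole pixel, march along the eight direction vectors to the first valid contour pixel, pair opposite directions, and blend the four pairs with bilateral-style weights (a photometric term on the depth difference within each pair, a geometric term on each contour pixel's distance from the hole pixel). The zero-depth hole convention, the search range, and the kernel widths sigma_r and sigma_s are assumptions for illustration, not the paper's settings.

```python
import numpy as np

# Four representative directions as opposite-pixel pairs:
# horizontal, vertical, and the two diagonals.
PAIRS = [((0, -1), (0, 1)), ((-1, 0), (1, 0)),
         ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]

def first_valid(depth, y, x, dy, dx, max_steps=50):
    """March from (y, x) along (dy, dx) to the first non-hole contour pixel."""
    H, W = depth.shape
    for r in range(1, max_steps + 1):
        yy, xx = y + r * dy, x + r * dx
        if not (0 <= yy < H and 0 <= xx < W):
            return None
        if depth[yy, xx] > 0:  # assumption: zero depth marks a hole
            return depth[yy, xx], float(r)
    return None

def fill_holes(depth, sigma_r=20.0, sigma_s=10.0):
    out = depth.astype(float).copy()
    for y, x in zip(*np.nonzero(depth == 0)):
        num = den = 0.0
        for d1, d2 in PAIRS:
            a = first_valid(depth, y, x, *d1)
            b = first_valid(depth, y, x, *d2)
            if a is None or b is None:
                continue
            (da, ra), (db, rb) = a, b
            # Photometric term: a pair straddling a depth edge gets low weight,
            # which is what keeps disconnected depth edges from being smeared.
            wp = np.exp(-(da - db) ** 2 / (2 * sigma_r ** 2))
            # Geometric terms: nearer contour pixels count more.
            ga = np.exp(-ra ** 2 / (2 * sigma_s ** 2))
            gb = np.exp(-rb ** 2 / (2 * sigma_s ** 2))
            num += wp * (ga * da + gb * db)
            den += wp * (ga + gb)
        if den > 0:
            out[y, x] = num / den
    return out
```

Down-weighting any pair whose two contour depths disagree is what lets the filter favor the pairs that lie on the same surface, so a depth edge interrupted by the hole is reconnected rather than blurred.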