Paper
26 September 2013 Temporally coherent 4D video segmentation for teleconferencing
Jana Ehmann, Onur G. Guleryuz
Abstract
We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
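The abstract outlines a three-stage pipeline: fill missing depth values, segment the foreground, and enforce temporal coherence across frames. The sketch below illustrates that general shape with deliberately simple stand-ins; it is not the authors' method. The diffusion-style hole filling, the fixed depth threshold `fg_max_depth`, and the exponential mask smoothing are all illustrative assumptions chosen for brevity (the paper combines depth with RGB information, which this sketch omits).

```python
import numpy as np

def fill_missing_depth(depth, invalid=0, iterations=10):
    """Fill holes (invalid sensor readings) in a depth map by iteratively
    averaging each hole pixel's valid 4-neighbors (diffusion-style inpainting)."""
    d = depth.astype(float).copy()
    d[d == invalid] = np.nan
    for _ in range(iterations):
        if not np.isnan(d).any():
            break
        padded = np.pad(d, 1, constant_values=np.nan)
        neighbors = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                              padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = ~np.isnan(neighbors)
        counts = valid.sum(axis=0)
        sums = np.nansum(neighbors, axis=0)          # all-NaN column sums to 0
        avg = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
        holes = np.isnan(d)
        d[holes] = avg[holes]                        # fill only the hole pixels
    return np.nan_to_num(d, nan=invalid)

def segment_frame(depth, fg_max_depth, prev_mask=None, alpha=0.7):
    """Label pixels closer than fg_max_depth as foreground; blending the mask
    with the previous frame's mask suppresses frame-to-frame flicker."""
    mask = ((depth > 0) & (depth < fg_max_depth)).astype(float)
    if prev_mask is not None:
        mask = alpha * mask + (1.0 - alpha) * prev_mask
    return mask
```

In a full system the soft mask would then composite the foreground onto the virtual background (e.g. `out = mask[..., None] * frame + (1 - mask[..., None]) * background`), with the per-frame mask carried forward as `prev_mask`.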
© (2013) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jana Ehmann and Onur G. Guleryuz "Temporally coherent 4D video segmentation for teleconferencing", Proc. SPIE 8856, Applications of Digital Image Processing XXXVI, 88560L (26 September 2013); https://doi.org/10.1117/12.2031740
KEYWORDS
RGB color model

Image segmentation

Video

Sensors

Video surveillance

Detection and tracking algorithms

Temporal coherence
