Human body tracking using LMS-VSMM from monocular video sequences (19 May 2011)
Hong Han, Zhichao Chen, LC Jiao, Youjian Fan
Abstract
A new model-based human body tracking framework grounded in learning theory is proposed in this paper. The framework introduces a likely-model-set variable structure multiple model (LMS-VSMM) estimator to track articulated human motion in monocular image sequences. Key joint points are selected as the image features: they are detected automatically, and undetected points are estimated with particle filters. Multiple motion models are learned from the CMU motion capture database by ridge regression and used to guide tracking. During tracking, the motion models currently in effect switch from one to another so as to match the present human motion mode. A motion model is activated according to the change in the projection angle of the kinematic chain and the topological and compatibility relationships among the models, and is terminated according to its model probability. The likely-model-set scheme of VSMM is then used to estimate the quaternion vectors of the joint rotations. Experiments on two videos demonstrate that this tracking framework is effective with respect to both 3D pose and 2D projection.
© (2011) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Hong Han, Zhichao Chen, LC Jiao, and Youjian Fan "Human body tracking using LMS-VSMM from monocular video sequences", Proc. SPIE 8049, Automatic Target Recognition XXI, 80490K (19 May 2011); https://doi.org/10.1117/12.883604
KEYWORDS
Motion models
3D modeling
Video
Data modeling
Particle filters
Particles
Kinematics