Paper
Activity recognition using Video Event Segmentation with Text (VEST)
20 June 2014
Hillary Holloway, Eric K. Jones, Andrew Kaluzniacki, Erik Blasch, Jorge Tierno
Abstract
Multi-Intelligence (multi-INT) data includes video, text, and signals that require analysis by operators. Analysis methods include information fusion approaches such as filtering, correlation, and association. In this paper, we discuss the Video Event Segmentation with Text (VEST) method, which provides event boundaries of an activity and compiles the related messages and video clips for later review. VEST infers meaningful activities by clustering multiple streams of time-sequenced multi-INT intelligence data and derived fusion products. We discuss exemplar results that segment raw full-motion video (FMV) data by using extracted commentary message timestamps, FMV metadata, and user-defined queries.
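To make the clustering idea concrete, the sketch below shows one plausible ingredient of a VEST-style pipeline: grouping time-stamped commentary messages into candidate activity events by splitting on large temporal gaps, then using the resulting boundaries to clip the corresponding FMV. The abstract does not publish the actual algorithm; the function name, gap threshold, and data layout here are illustrative assumptions.

```python
# Hypothetical sketch (not the published VEST algorithm): cluster message
# timestamps into event segments by splitting wherever the gap between
# consecutive messages exceeds a threshold.

def segment_by_gaps(timestamps, max_gap=30.0):
    """Group timestamps (seconds) into clusters whenever the gap between
    consecutive messages exceeds max_gap; return (start, end) boundaries
    that could be used to clip the corresponding video stream."""
    if not timestamps:
        return []
    ts = sorted(timestamps)
    segments = []
    start = prev = ts[0]
    for t in ts[1:]:
        if t - prev > max_gap:      # gap too large: close current event
            segments.append((start, prev))
            start = t               # open a new event
        prev = t
    segments.append((start, prev))  # close the final event
    return segments

# Example: commentary timestamps (s) from two bursts of analyst messages.
msgs = [10, 14, 22, 95, 101, 108, 110]
print(segment_by_gaps(msgs))  # → [(10, 22), (95, 110)]
```

In a fuller multi-INT setting, one would fuse several such time-sequenced streams (messages, FMV metadata, signals) before clustering, and filter the resulting segments against user-defined queries.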
© (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Hillary Holloway, Eric K. Jones, Andrew Kaluzniacki, Erik Blasch, and Jorge Tierno "Activity recognition using Video Event Segmentation with Text (VEST)", Proc. SPIE 9091, Signal Processing, Sensor/Information Fusion, and Target Recognition XXIII, 90910O (20 June 2014); https://doi.org/10.1117/12.2050413
CITATIONS: Cited by 7 scholarly publications.
KEYWORDS
Video
Video surveillance
Analytical research
Gold
Cameras
Information fusion
Image segmentation