In this paper, a novel co-saliency model is proposed with the aim of detecting co-salient objects in multiple videos. On the basis of superpixel segmentation results, we fuse temporal saliency and spatial saliency with a superpixel-level object prior to generate the intra-saliency map for each video frame. Then a video-level global object/background histogram is calculated for each video based on the adaptive thresholding results of the intra-saliency maps, and seed saliency maps are generated by using similarity measures between superpixels and the global object/background histograms. Finally, the co-saliency maps are generated by a recovery process that propagates the seed saliency measures to all regions in each video frame. Experimental results on a public video dataset show that the proposed video co-saliency model consistently outperforms a state-of-the-art video saliency model and image co-saliency models.
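The pipeline described above can be sketched in simplified form. The fusion rule, the mean-plus-standard-deviation adaptive threshold, and the histogram-intersection similarity below are assumptions for illustration; the paper's exact formulations are not given in the abstract, and frames are treated as single-channel intensity images for brevity.

```python
import numpy as np

def intra_saliency(temporal, spatial, object_prior):
    """Fuse temporal and spatial saliency, weighted by a superpixel-level
    object prior (simple average fusion assumed here)."""
    return object_prior * (temporal + spatial) / 2.0

def adaptive_threshold(saliency, k=1.0):
    """Adaptive threshold on a saliency map; a mean + k*std rule is
    assumed, the paper's actual rule may differ."""
    return saliency.mean() + k * saliency.std()

def global_histograms(frames, saliency_maps, bins=8):
    """Accumulate video-level object/background intensity histograms from
    the thresholded intra-saliency maps of all frames."""
    obj_hist = np.zeros(bins)
    bg_hist = np.zeros(bins)
    for frame, sal in zip(frames, saliency_maps):
        t = adaptive_threshold(sal)
        obj_hist += np.histogram(frame[sal >= t], bins=bins, range=(0, 1))[0]
        bg_hist += np.histogram(frame[sal < t], bins=bins, range=(0, 1))[0]
    # normalize to probability distributions
    obj_hist /= max(obj_hist.sum(), 1e-9)
    bg_hist /= max(bg_hist.sum(), 1e-9)
    return obj_hist, bg_hist

def seed_saliency(region_hist, obj_hist, bg_hist):
    """Score a superpixel's histogram against the global object/background
    histograms; histogram intersection is an assumed similarity measure."""
    sim_obj = np.minimum(region_hist, obj_hist).sum()
    sim_bg = np.minimum(region_hist, bg_hist).sum()
    return sim_obj / (sim_obj + sim_bg + 1e-9)
```

A superpixel whose histogram matches the global object histogram more closely than the background histogram receives a seed saliency score above 0.5; these seeds would then be propagated to all regions in the recovery step.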