Recent research on object segmentation has mostly concentrated on single-view images or on objects in 3D settings. In this paper, we present a novel method for efficient multi-view foreground object segmentation that uses spatial consistency across adjacent views as a constraint to generate consistent masks. Although conventional segmentation results at individual views are relatively accurate, inconsistent regions remain where the mask boundaries differ over the same area across multiple views. The central idea of our method is to use the camera parameters to guide a refocusing procedure in which each instance is refocused across views via multi-view projections. The refocused images are then fed to an instance segmentation network to predict bounding boxes and object masks. In the final step, the network output serves as prior information for Gaussian mixture models (GMMs) to achieve more accurate segmentation results. While many concrete implementations of this general idea are feasible, this simple and efficient approach already achieves satisfactory results. Experimental results demonstrate, both qualitatively and quantitatively, that the proposed method produces excellent masks with fewer background pixels, ultimately improving 3D display quality. We hope this simple and effective method can benefit future research on related tasks.
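The abstract leaves the GMM refinement step unspecified. As an illustrative sketch only (not the paper's implementation), the snippet below fits one full-covariance Gaussian per class, a deliberate single-component simplification of the per-class GMMs the abstract describes, to the pixel colors inside and outside the network's coarse mask, then reassigns each pixel to the more likely class. The function name, array layout, and regularization constant are all assumptions introduced for this example.

```python
import numpy as np

def refine_mask(pixels, coarse_mask):
    """Refine a coarse network mask by modeling foreground/background colors
    and reassigning each pixel to the more likely color model.

    Illustrative simplification: one full-covariance Gaussian per class
    (a degenerate GMM) instead of a multi-component mixture.

    pixels:      (H, W, 3) float array of colors
    coarse_mask: (H, W) bool array, True = foreground prior from the network
    """
    X = pixels.reshape(-1, 3)
    m = coarse_mask.ravel()

    def log_likelihood(data, mean, cov):
        # Gaussian log-density up to a shared additive constant
        d = data - mean
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet)

    eps = 1e-3 * np.eye(3)  # regularize covariances for invertibility
    fg_mu, fg_cov = X[m].mean(0), np.cov(X[m].T) + eps
    bg_mu, bg_cov = X[~m].mean(0), np.cov(X[~m].T) + eps

    fg_ll = log_likelihood(X, fg_mu, fg_cov)
    bg_ll = log_likelihood(X, bg_mu, bg_cov)
    return (fg_ll > bg_ll).reshape(coarse_mask.shape)
```

This is the same color-model-with-mask-prior pattern used by GrabCut-style refinement: the learned prior only needs to be roughly right, since pixels whose colors fit the background model better are trimmed away, reducing background leakage in the final mask.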