To alleviate the inherent trade-off between the spatial and angular resolutions of light field (LF) images, much research has been carried out to increase the angular resolution of LFs by synthesizing intermediate views. Since the height of each epipolar plane image (EPI) equals the angular resolution of the LF, we formulate view synthesis as doubling the height of each EPI. To stretch the EPI efficiently without excessive computation, we propose to first segment the EPI into superpixels and then adaptively interpolate each superpixel separately. Test results on synthetic and real-scene LF datasets show that our scheme achieves average peak signal-to-noise ratio (PSNR) / structural similarity index measure (SSIM) of about 30.58 dB / 0.9131 and 32.28 dB / 0.9510, with computing times of 5.80 minutes and 1.83 minutes on the HCI and EPFL datasets, respectively.
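As a rough illustration of the pipeline described above, the sketch below segments one EPI into SLIC superpixels (scikit-image) and inserts an intermediate angular row inside each superpixel. The segment count and the plain row averaging are placeholder assumptions, not the paper's adaptive per-superpixel interpolation.

```python
import numpy as np
from skimage.segmentation import slic  # SLIC superpixel segmentation

def double_epi_height(epi, n_segments=60):
    """Double the angular (vertical) size of a single EPI.

    epi : (U, S, 3) array; U = angular resolution, S = spatial width.
    This sketch only shows the overall flow (segment, then interpolate
    inside each segment), with row averaging standing in for the
    adaptive interpolation step.
    """
    epi_f = epi.astype(np.float32)
    U, S, _ = epi.shape
    labels = slic(epi, n_segments=n_segments, compactness=10, start_label=0)

    out = np.zeros((2 * U - 1, S, 3), dtype=np.float32)
    out[::2] = epi_f                          # keep the original angular rows
    out[1::2] = (epi_f[:-1] + epi_f[1:]) / 2  # fallback average for new rows

    # refine each new row superpixel by superpixel
    for lab in np.unique(labels):
        for u in range(U - 1):
            m = (labels[u] == lab) & (labels[u + 1] == lab)
            if m.any():
                # a full implementation would pick the interpolation per
                # superpixel (e.g. from its slope/disparity); averaged here
                out[2 * u + 1, m] = (epi_f[u, m] + epi_f[u + 1, m]) / 2
    return out.astype(epi.dtype)
```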
Over the years, many methods have emerged to solve the super-resolution problem of light field (LF) images, and among them, deep-learning-based methods have recently attracted considerable attention. Although features extracted from the epipolar domain are actively investigated for LF super-resolution owing to their potential for capturing the relationship between the spatial and angular domains, we note that spatial features remain the most important foundation of feature extraction. In this paper, we design a network, named LFSelectSR, that employs multiple convolutional kernels to fully extract spatial features and introduces a dynamic selection mechanism to pick out the most valuable ones. By training and testing the network on well-known datasets, we demonstrate that it achieves state-of-the-art-level performance under certain conditions.
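Since the abstract only names the ingredients (multiple convolutional kernels plus a dynamic selection mechanism), the block below is a minimal PyTorch sketch of that idea in the style of selective-kernel attention. The 3x3/5x5 kernel sizes, channel count, and reduction ratio are assumptions, not LFSelectSR's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelSelectBlock(nn.Module):
    """Multi-kernel spatial block with dynamic branch selection (sketch).

    Two parallel branches with different receptive fields are fused by
    channel-wise attention weights, so the block can emphasise the most
    valuable spatial features per channel.
    """
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * channels),  # weights per branch
        )

    def forward(self, x):
        b3 = F.relu(self.branch3(x))
        b5 = F.relu(self.branch5(x))
        s = (b3 + b5).mean(dim=(2, 3))                  # global descriptor (B, C)
        w = self.fc(s).view(-1, 2, b3.size(1), 1, 1)    # (B, 2, C, 1, 1)
        w = torch.softmax(w, dim=1)                     # select between branches
        return w[:, 0] * b3 + w[:, 1] * b5

# quick shape check on a batch of sub-aperture feature maps
feats = torch.randn(2, 64, 32, 32)
print(KernelSelectBlock(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```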
To alleviate the inherent trade-off between the spatial and angular resolutions of light field (LF) images, much work has been carried out to enhance the angular resolution of LFs by creating novel views. In this paper, we investigate a method that enhances the angular resolution of an LF image by first grouping the pixels within and across the multiple views into LF superpixels using an existing LF segmentation method, and then generating novel views by shifting and overlaying layers that contain LF superpixels with similar per-view disparity values. Experimental results on synthetic and real-scene datasets show that our method achieves good reconstruction quality.
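The following sketch illustrates the shift-and-overlay step under simplifying assumptions: given a precomputed LF superpixel labelling and one disparity value per layer, each layer of the central view is shifted in proportion to the angular offset of the novel view and composited. The layer ordering and the lack of hole filling are stand-ins for, not reproductions of, the paper's procedure.

```python
import numpy as np

def synthesize_view(center_view, labels, disparities, du, dv):
    """Generate a novel view by shifting and overlaying disparity layers.

    center_view : (H, W, 3) central sub-aperture image
    labels      : (H, W) LF superpixel label per pixel (assumed given by an
                  existing LF segmentation method)
    disparities : dict mapping label -> per-view disparity (pixels per view)
    du, dv      : angular offset of the novel view from the centre view
    """
    H, W, _ = center_view.shape
    out = np.zeros_like(center_view)
    # paint layers with small |disparity| first so larger-shift layers overlay
    # them; a full implementation would order layers by actual depth
    for lab in sorted(disparities, key=lambda l: abs(disparities[l])):
        d = disparities[lab]
        ys, xs = np.nonzero(labels == lab)
        ys_new = np.clip(np.round(ys + dv * d).astype(int), 0, H - 1)
        xs_new = np.clip(np.round(xs + du * d).astype(int), 0, W - 1)
        out[ys_new, xs_new] = center_view[ys, xs]
    return out
```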
Light field (LF) images are captured by plenoptic cameras, which suffer from a trade-off between spatial and angular resolution. Numerous methods have been proposed to enhance the spatial resolution of images captured by LF cameras. Among the state-of-the-art methods is an approach that super-resolves LF images using graph-based regularization; however, it takes too much time to execute. In this paper, we propose a method to simplify the graph-computation process. Experimental results show that our proposed method reduces computation time by up to 18% compared to the original approach while maintaining the image quality of the LF images.
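To make the role of graph-based regularization concrete, here is a minimal sketch of the solver side of such a formulation. The operator names (A, W), the combinatorial Laplacian, and the conjugate-gradient solver are illustrative assumptions; the paper's contribution, simplifying how the graph is computed, concerns the construction of W rather than this solving step.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import cg

def graph_regularized_sr(y, A, W, lam=0.1):
    """Solve  min_x ||A x - y||^2 + lam * x^T L x  (sketch).

    y : stacked low-resolution observations (1-D array)
    A : sparse blur/downsampling operator mapping the HR image x to y
    W : sparse affinity matrix linking similar pixels within and across views
    """
    # combinatorial graph Laplacian L = D - W
    L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W
    lhs = A.T @ A + lam * L          # symmetric positive semi-definite system
    rhs = A.T @ y
    x, _ = cg(lhs, rhs, maxiter=500)  # conjugate gradient solve
    return x
```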