Video super-resolution (VSR) aims to generate high-resolution (HR) video by exploiting the temporal consistency and contextual similarity of low-resolution (LR) video sequences. The key to improving VSR quality lies in accurate frame alignment and the feature fusion of adjacent frames. We propose a dual-channel attention deep-and-shallow super-resolution network, combined with HR optical flow compensation, to construct an end-to-end VSR framework, HOFADS-VSR (attention deep-and-shallow VSR network with HR optical flow compensation). HR optical flow, computed from the spatiotemporal dependency of consecutive LR frames, compensates adjacent frames to achieve accurate frame alignment. Deep and shallow channels with attention residual blocks restore small-scale detail features and large-scale contour features, respectively, and strengthen the rich features of global and local regions through weight adjustment. Extensive experiments demonstrate the effectiveness and robustness of HOFADS-VSR. Comparative results on the Vid4, SPMC-12, and Harmonic-8 datasets show that our network not only achieves good performance on peak signal-to-noise ratio and structural similarity index but also restores structure and texture with excellent fidelity.
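To illustrate the flow-compensation step described above, the following is a minimal sketch of backward-warping an adjacent frame onto a reference frame with a dense optical-flow field. It is not the paper's implementation: the function name, the nearest-neighbor sampling, and the toy frames are assumptions for illustration only.

```python
import numpy as np

def warp_with_flow(frame, flow):
    """Backward-warp a (H, W) frame using a dense flow field of shape
    (H, W, 2) holding per-pixel (dx, dy) displacements toward the
    reference frame. Nearest-neighbor sampling keeps the sketch simple."""
    h, w = frame.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # For each reference-frame pixel, sample the adjacent frame at the
    # position shifted by the flow, clamping to the image border.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

# Toy example: the adjacent frame's content sits one pixel to the right
# of the reference, so a uniform flow of (+1, 0) realigns it.
ref = np.zeros((4, 4))
ref[:, 1] = 1.0
shifted = np.roll(ref, 1, axis=1)            # content moved right by one
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0                           # dx = +1 everywhere
aligned = warp_with_flow(shifted, flow)      # matches ref after warping
```

In the paper's pipeline the flow is estimated at HR resolution from consecutive LR frames and the aligned frames are then fused by the attention network; this sketch only shows the generic alignment-by-warping idea.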