Video super-resolution (VSR) aims to generate high-resolution (HR) video by exploiting the temporal consistency and contextual similarity of low-resolution (LR) video sequences. The key to improving VSR quality lies in accurate frame alignment and the fusion of features from adjacent frames. We propose a dual-channel attention deep-and-shallow super-resolution network, combined with HR optical flow compensation, to construct an end-to-end VSR framework, HOFADS-VSR (an attention deep-and-shallow VSR network unified with HR optical flow compensation). HR optical flow, calculated from the spatiotemporal dependency of consecutive LR frames, is used to compensate adjacent frames and thereby achieve accurate frame alignment. The deep and shallow channels, built from attention residual blocks, restore small-scale detail features and large-scale contour features, respectively, and strengthen the rich features of global and local regions through weight adjustment. Extensive experiments demonstrate the effectiveness and robustness of HOFADS-VSR. Comparative results on the Vid4, SPMC-12, and Harmonic-8 datasets show that our network not only achieves strong peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) scores but also restores structure and texture with excellent fidelity.
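The core alignment step described above, warping an adjacent frame toward the reference frame using a dense optical flow field, can be sketched in NumPy. This is a minimal illustration of flow-compensated alignment and the PSNR metric, not the paper's actual network; the bilinear backward-warping scheme and function names are assumptions for illustration.

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp an adjacent frame toward the reference frame.

    frame: (H, W) grayscale image; flow: (H, W, 2) per-pixel (dx, dy)
    displacements pointing from the reference into the adjacent frame.
    Uses bilinear sampling with border clamping (an illustrative choice).
    """
    H, W = frame.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x_src = np.clip(xs + flow[..., 0], 0, W - 1)
    y_src = np.clip(ys + flow[..., 1], 0, H - 1)
    x0 = np.floor(x_src).astype(int)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y0 = np.floor(y_src).astype(int)
    y1 = np.clip(y0 + 1, 0, H - 1)
    wx, wy = x_src - x0, y_src - y0
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

For example, an adjacent frame whose content is shifted one pixel to the right is realigned by a constant flow of (1, 0), and its PSNR against the reference rises accordingly; in the full framework this warping is driven by the HR optical flow estimated from consecutive LR frames.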