Presentation + Paper
Deep RNNs for video denoising
28 September 2016
Abstract
Video denoising can be described as the problem of mapping a sequence of noisy frames to a clean one. We propose a deep architecture based on Recurrent Neural Networks (RNNs) for video denoising. The model learns a patch-based end-to-end mapping between noisy and clean video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers such that each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information along the temporal dimension, which benefits video denoising; (ii) the deep architecture has sufficient capacity to express the mapping from corrupted input videos to clean output videos; and (iii) the model is general enough to learn different mappings for videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.
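To make the stacked-recurrent idea concrete, the short sketch below shows one plausible reading of the abstract in PyTorch: a multi-layer RNN in which each layer consumes the hidden states of the layer beneath it, mapping a sequence of flattened noisy patches to a denoised patch. This is not the authors' implementation; the patch size, hidden width, depth, final-step linear readout, and MSE loss are illustrative assumptions.

    # Minimal sketch of a stacked-RNN patch denoiser (assumptions, not the paper's code).
    import torch
    import torch.nn as nn

    class DeepRNNDenoiser(nn.Module):
        # Maps a sequence of flattened noisy patches to one denoised patch.
        def __init__(self, patch_size=17, hidden_size=512, num_layers=3):
            super().__init__()
            in_dim = patch_size * patch_size  # flattened grayscale patch
            # Stacked RNN: layer k receives the hidden states of layer k-1 as input.
            self.rnn = nn.RNN(in_dim, hidden_size, num_layers, batch_first=True)
            # Linear readout from the top layer's final hidden state to a patch.
            self.out = nn.Linear(hidden_size, in_dim)

        def forward(self, noisy_seq):
            # noisy_seq: (batch, num_frames, patch_size * patch_size)
            hidden_seq, _ = self.rnn(noisy_seq)
            return self.out(hidden_seq[:, -1, :])

    # Training would minimize a reconstruction loss against the clean patch:
    model = DeepRNNDenoiser()
    noisy = torch.randn(8, 5, 17 * 17)  # 8 sequences of 5 noisy patches (dummy data)
    clean = torch.randn(8, 17 * 17)     # placeholder clean targets
    loss = nn.functional.mse_loss(model(noisy), clean)

Reading out only the final time step reflects the abstract's framing of denoising one clean frame from a window of noisy frames; a per-frame readout over all hidden states would be an equally plausible variant.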
Conference Presentation
© (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Xinyuan Chen, Li Song, and Xiaokang Yang "Deep RNNs for video denoising", Proc. SPIE 9971, Applications of Digital Image Processing XXXIX, 99711T (28 September 2016); https://doi.org/10.1117/12.2239260
CITATIONS
Cited by 40 scholarly publications and 3 patents.
KEYWORDS
Video
Denoising
Neural networks
Video processing
Motion models
Video acceleration
Associative arrays
