10 April 2018 Deep supervised dictionary learning for no-reference image quality assessment
Yuge Huang, Xuesong Liu, Xiang Tian, Fan Zhou, Yaowu Chen, Rongxin Jiang
Abstract
We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a fully convolutional local feature extractor; an encoding module with an inherent dictionary, which aggregates local features into a fixed-length, quality-aware global image representation; and a regression module that maps this representation to an image quality score. The model can be trained end-to-end, with all parameters, including the convolutional weights, the dictionary, and the regression weights, learned simultaneously from the loss function. In addition, the model predicts quality scores for input images of arbitrary size in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.
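To illustrate how a dictionary-based encoding module can turn a variable number of local features into a fixed-length representation, here is a minimal NumPy sketch using soft-assignment residual encoding. The exact formulation in the paper is not reproduced here; the soft-assignment weighting, normalization, and all names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def dictionary_encode(features, dictionary, beta=1.0):
    """Aggregate local features into a fixed-length vector via a
    learned dictionary (a sketch; the paper's exact encoding is assumed).

    features:   (N, D) local features from the CNN (N varies with image size).
    dictionary: (K, D) codewords, learned jointly with the network.
    Returns a fixed-length (K*D,) representation, independent of N."""
    # Pairwise squared distances between features and codewords: (N, K)
    d2 = ((features[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    # Soft assignment of each feature to codewords (softmax over K)
    w = np.exp(-beta * d2)
    w /= w.sum(axis=1, keepdims=True)
    # Weighted residuals aggregated per codeword: (K, D)
    resid = features[:, None, :] - dictionary[None, :, :]   # (N, K, D)
    enc = (w[:, :, None] * resid).sum(axis=0)
    enc /= np.linalg.norm(enc) + 1e-12                      # L2 normalize
    return enc.ravel()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))        # 16 codewords of dimension 8
feats_small = rng.normal(size=(50, 8))     # features from a small image
feats_large = rng.normal(size=(200, 8))    # features from a larger image

v1 = dictionary_encode(feats_small, codebook)
v2 = dictionary_encode(feats_large, codebook)
assert v1.shape == v2.shape == (16 * 8,)   # fixed length regardless of input size
```

Because the output length depends only on the dictionary size, not on the number of local features, the same network can score images of arbitrary size in a single pass, as the abstract describes; a linear regression layer on this vector would then produce the quality score.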
© 2018 SPIE and IS&T. 1017-9909/2018/$25.00
Yuge Huang, Xuesong Liu, Xiang Tian, Fan Zhou, Yaowu Chen, and Rongxin Jiang "Deep supervised dictionary learning for no-reference image quality assessment," Journal of Electronic Imaging 27(2), 023020 (10 April 2018). https://doi.org/10.1117/1.JEI.27.2.023020
Received: 18 August 2017; Accepted: 16 March 2018; Published: 10 April 2018
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Image quality, Associative arrays, Databases, Distortion, Computer programming, Feature extraction, Data modeling
RELATED CONTENT

Towards real-time image quality assessment
Proceedings of SPIE (February 02 2011)
