Revisiting image ordinal estimation: how to deal with ordinal relationship in deep learning?
Chao Zhang, Ce Zhu, Xun Xu
Abstract
Image ordinal estimation, usually defined as a regression or classification problem, is a challenging task in computer vision. Many recent works simply apply deep convolutional neural networks (CNNs) and obtain strong results, but they do not analyze why the networks work well compared with conventional computer vision methods. We revisit the problem of image ordinal estimation and propose two methods for it. First, a double-task convolutional neural network (DTCNN) model is proposed that combines classification and regression objectives. We then analyze why the DTCNN works well from three perspectives: (1) the ordinal relationship between the two tasks, (2) the coarse/fine category levels of the two tasks, and (3) neuron activations on the two tasks. To avoid choosing a heuristic combination parameter, we further propose a risk-based CNN (R-CNN) model that embeds the ordinal constraint into the classification task. Based on the Bayesian risk rule, category information and ordinal information are fused by a dynamically weighted loss function. Experimental results on two ordinal datasets show that both DTCNN and R-CNN obtain competitive performance.
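The two objectives described in the abstract can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration of (a) a double-task head trained with a weighted sum of classification and regression losses, and (b) an expected-risk loss that weights each class by its ordinal distance to the ground-truth bin. The layer shapes, the combination parameter name `lam`, and the absolute-difference risk matrix are assumptions for illustration only, not the paper's exact formulation.

```python
# Hedged sketch of the two loss ideas in the abstract (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DoubleTaskHead(nn.Module):
    """Shared CNN features feed a classification head and a regression head."""

    def __init__(self, feat_dim: int, num_bins: int):
        super().__init__()
        self.cls_head = nn.Linear(feat_dim, num_bins)   # coarse ordinal bins
        self.reg_head = nn.Linear(feat_dim, 1)          # fine continuous value

    def forward(self, feats):
        return self.cls_head(feats), self.reg_head(feats).squeeze(-1)


def double_task_loss(logits, pred_value, bin_labels, true_value, lam=0.5):
    """DTCNN-style objective: classification loss plus lam * regression loss.
    lam plays the role of the heuristic combination parameter mentioned above."""
    return F.cross_entropy(logits, bin_labels) + lam * F.mse_loss(pred_value, true_value)


def risk_weighted_loss(logits, bin_labels, num_bins):
    """Risk-based variant: penalize predictions by their expected ordinal
    distance to the ground-truth bin, so far-off bins cost more than near misses.
    The absolute-difference risk is an illustrative assumption."""
    probs = F.softmax(logits, dim=-1)                                   # (B, K)
    bins = torch.arange(num_bins, device=logits.device)                 # (K,)
    risk = (bins.unsqueeze(0) - bin_labels.unsqueeze(1)).abs().float()  # (B, K)
    return (risk * probs).sum(dim=-1).mean()  # expected risk over the batch
```

In this sketch the first loss keeps classification and regression as separate tasks that share features, while the second folds the ordinal constraint directly into a single classification-style loss, which is the motivation the abstract gives for avoiding a hand-tuned combination weight.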
© 2019 SPIE and IS&T 1017-9909/2019/$25.00
Chao Zhang, Ce Zhu, and Xun Xu "Revisiting image ordinal estimation: how to deal with ordinal relationship in deep learning?," Journal of Electronic Imaging 28(1), 013025 (5 February 2019). https://doi.org/10.1117/1.JEI.28.1.013025
Received: 19 October 2018; Accepted: 9 January 2019; Published: 5 February 2019
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Image analysis
Neurons
Quantization
Image classification
Data modeling
Convolutional neural networks
Feature extraction
