Explanation of face recognition via saliency maps
4 October 2023
Abstract
Despite significant progress in recent years, deep face recognition is often treated as a “black box” and has been criticized for lacking explainability. It is becoming increasingly important to understand the characteristics and decisions of deep face recognition systems in order to make them more acceptable to the public. Explainable face recognition (XFR) refers to the problem of interpreting why a recognition model matches a probe face with one identity over others. Recent studies have explored the use of visual saliency maps as an explanation mechanism, but they often lack a deeper analysis in the context of face recognition. This paper starts by proposing a rigorous definition of XFR that focuses on the decision-making process of the deep recognition model. Based on that definition, a similarity-based RISE algorithm (S-RISE) is then introduced to produce high-quality visual saliency maps for a deep face recognition model. Furthermore, an evaluation approach is proposed to systematically validate the reliability and accuracy of general visual saliency-based XFR methods.
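The paper itself specifies S-RISE; as a rough illustration only, the sketch below shows how a RISE-style approach can be adapted from class scores to similarity scores: random binary masks occlude parts of the probe image, and each mask is weighted by the cosine similarity the masked probe still achieves against a reference face. The function name similarity_saliency, the embed_fn callable (assumed to return L2-normalized embeddings), and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def similarity_saliency(probe, reference, embed_fn, n_masks=2000,
                        grid=7, p_keep=0.5, seed=0):
    """RISE-style saliency sketch: weight random occlusion masks by the
    similarity the masked probe still achieves against the reference."""
    rng = np.random.default_rng(seed)
    h, w = probe.shape[:2]
    ref_emb = embed_fn(reference)            # embedding of the reference face
    saliency = np.zeros((h, w))

    for _ in range(n_masks):
        # Coarse binary mask, enlarged to image size by block repetition
        # (nearest-neighbour upsampling keeps the sketch dependency-light;
        # the original RISE uses smooth bilinear upsampling).
        coarse = (rng.random((grid, grid)) < p_keep).astype(float)
        block = np.ones((h // grid + 1, w // grid + 1))
        mask = np.kron(coarse, block)[:h, :w]

        masked = probe * mask[..., None]                # occlude part of the probe
        sim = float(np.dot(embed_fn(masked), ref_emb))  # cosine similarity (unit norms)
        saliency += sim * mask                          # pixels kept in high-similarity
                                                        # masks accumulate more weight

    return saliency / n_masks
```

In practice, embed_fn would wrap the deep face recognition model under study, and the resulting map can be normalized before visualization.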
(2023) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Yuhang Lu and Touradj Ebrahimi "Explanation of face recognition via saliency maps", Proc. SPIE 12674, Applications of Digital Image Processing XLVI, 126740U (4 October 2023); https://doi.org/10.1117/12.2677353
KEYWORDS
Facial recognition systems
Visualization
Detection and tracking algorithms
Visual process modeling
Data modeling
Image processing
Education and training