Paper
22 May 2024
Research on image character emotion recognition algorithm based on multi-level associative relationships
Zhe Zhang, Yiding Wang
Proceedings Volume 13176, Fourth International Conference on Machine Learning and Computer Application (ICMLCA 2023); 131760M (2024) https://doi.org/10.1117/12.3028954
Event: Fourth International Conference on Machine Learning and Computer Application (ICMLCA 2023), 2023, Hangzhou, China
Abstract
With the rapid growth of the mobile internet and self-media, there is an increasingly urgent need to understand the emotional states of characters in images. Traditional methods rely primarily on facial expressions to infer a character's emotion, which fails to handle cases where the face is occluded or not visible, as is common in self-media images. Algorithms proposed in some recent studies that exploit scene context for emotion recognition still do not adequately extract and fuse contextual cues. To address these issues, this paper proposes an emotion recognition algorithm for image characters based on multi-level associative relationships. The algorithm analyzes the associative relationships of the target character at three levels: relationship acquisition, feature extraction, and feature fusion. At the relationship acquisition level, it strengthens the relationships with scene information through the Anti-Mask of the original image and of the semantically segmented image, and it augments the character's U2NET semantic segmentation to obtain the associative relationships of character interaction in the image. At the feature extraction level, the target-character image and each associated image are fed into separate networks for feature extraction. At the feature fusion level, the Split Attention module replaces the fully connected layer for deep feature fusion. This method effectively exploits the associative relationships in self-media images and improves the accuracy of emotion recognition on such images. Experimental results show that the average accuracy on the EMOTIC emotion dataset improves by 3.64 percentage points over the baseline algorithm, reaching 31.02%.
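The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of a Split-Attention-style fusion over per-branch feature vectors (for example, a target-character branch, a scene-association branch, and an interaction branch), in place of a fully connected fusion layer. The branch count, feature dimension, reduction factor, and class/module names are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class SplitAttentionFusion(nn.Module):
    """Illustrative sketch: fuse K branch feature vectors with split attention.
    Dimensions and branch count are assumptions, not taken from the paper."""
    def __init__(self, channels: int, num_branches: int, reduction: int = 4):
        super().__init__()
        self.num_branches = num_branches
        hidden = max(channels // reduction, 8)
        self.fc1 = nn.Linear(channels, hidden)
        self.relu = nn.ReLU(inplace=True)
        # One attention vector per branch, normalized across branches below.
        self.fc2 = nn.Linear(hidden, channels * num_branches)

    def forward(self, branch_feats):
        # branch_feats: list of K tensors, each of shape (B, C)
        stacked = torch.stack(branch_feats, dim=1)            # (B, K, C)
        summary = stacked.sum(dim=1)                          # (B, C) pooled summary
        attn = self.fc2(self.relu(self.fc1(summary)))         # (B, K*C)
        attn = attn.view(-1, self.num_branches, stacked.size(-1))
        attn = torch.softmax(attn, dim=1)                     # weights across branches
        return (attn * stacked).sum(dim=1)                    # (B, C) fused feature

# Example: fuse hypothetical 512-dim features from three branches.
fusion = SplitAttentionFusion(channels=512, num_branches=3)
feats = [torch.randn(2, 512) for _ in range(3)]
fused = fusion(feats)  # shape: (2, 512)

The fused vector would then feed an emotion classification head; this only illustrates the general split-attention fusion idea referenced in the abstract.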
© (2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Zhe Zhang and Yiding Wang "Research on image character emotion recognition algorithm based on multi-level associative relationships", Proc. SPIE 13176, Fourth International Conference on Machine Learning and Computer Application (ICMLCA 2023), 131760M (22 May 2024); https://doi.org/10.1117/12.3028954
KEYWORDS
Emotion, Image segmentation, Image fusion, Detection and tracking algorithms, Feature extraction, Feature fusion, Image processing