- Title
SAST: a suppressing ambiguity self-training framework for facial expression recognition.
- Authors
Guo, Zhe; Wei, Bingxin; Liu, Xuewen; Zhang, Zhibo; Liu, Shiya; Fan, Yangyu
- Abstract
Facial expression recognition (FER) suffers from insufficient label information: human expressions are complex and diverse, and many are ambiguous. Training with low-quality or low-quantity labels aggravates the ambiguity of model predictions and reduces FER accuracy. Improving the robustness of FER to ambiguous data under insufficient information remains challenging. To this end, we propose the Suppressing Ambiguity Self-Training (SAST) framework, the first attempt to address insufficient information in both label quality and label quantity simultaneously. Specifically, we design an Ambiguous Relative Label Usage (ARLU) strategy that mixes hard labels and soft labels to alleviate the information loss caused by hard labels. We also enhance the model's robustness to ambiguous data by means of Self-Training Resampling (STR). We further use facial landmarks and a Patch Branch (PB) to strengthen ambiguity suppression. Experiments on the RAF-DB, FERPlus, SFEW, and AffectNet datasets show that SAST outperforms six semi-supervised methods while using fewer annotations and achieves accuracy competitive with state-of-the-art (SOTA) FER methods. Our code is available at https://github.com/Liuxww/SAST.
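The hard/soft label mixing the abstract attributes to ARLU can be illustrated with a minimal sketch. This is not the paper's implementation; the mixing weight `alpha` and the 7-class expression setup are illustrative assumptions, and the general idea (blending a one-hot target with a predicted distribution) follows standard label-smoothing-style training targets:

```python
import numpy as np

def mix_labels(hard_label, soft_probs, alpha=0.7):
    """Blend a one-hot hard label with a model-predicted soft
    distribution. `alpha` weights the hard label (hypothetical
    parameter, not taken from the paper)."""
    num_classes = soft_probs.shape[0]
    one_hot = np.zeros(num_classes)
    one_hot[hard_label] = 1.0
    mixed = alpha * one_hot + (1.0 - alpha) * soft_probs
    return mixed / mixed.sum()  # renormalize to a valid distribution

# Example: 7 basic expression classes, annotated class index 3,
# with an ambiguous model prediction spread over several classes.
probs = np.array([0.05, 0.05, 0.1, 0.5, 0.1, 0.1, 0.1])
target = mix_labels(3, probs, alpha=0.7)
```

A target like this retains the annotator's hard label while preserving some of the probability mass the model assigns to visually similar expressions, which is the information-loss issue the abstract says hard labels alone cause.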
- Subjects
FACIAL expression; AMBIGUITY; PREDICTION models
- Publication
Multimedia Tools & Applications, 2024, Vol 83, Issue 18, p56059
- ISSN
1380-7501
- Publication type
Article
- DOI
10.1007/s11042-023-17749-w