- Title
Robust Multimodal Emotion Recognition from Conversation with Transformer-Based Crossmodality Fusion.
- Authors
Xie, Baijun; Sidulova, Mariia; Park, Chung Hyuk
- Abstract
Decades of scientific research have been conducted on developing and evaluating methods for automated emotion recognition. As technology grows exponentially, a wide range of emerging applications require recognition of the user's emotional state. This paper investigates a robust approach to multimodal emotion recognition in conversation. Three separate models for the audio, video, and text modalities are structured and fine-tuned on the MELD dataset. A transformer-based crossmodality fusion with the EmbraceNet architecture is then employed to estimate the emotion. The proposed multimodal network architecture achieves up to 65% accuracy, significantly surpassing each of the unimodal models. We apply multiple evaluation techniques to show that our model is robust and can even outperform state-of-the-art models on MELD.
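The fusion step the abstract refers to follows the EmbraceNet idea: each modality's features are first projected ("docked") to a common length, and the fused vector is then assembled by stochastically sampling, per feature index, which modality contributes that value. A minimal sketch of that stochastic embracement in plain Python (the variable names and the pre-docked feature vectors are illustrative assumptions, not from the paper):

```python
import random

def embrace(features, seed=0):
    # EmbraceNet-style fusion sketch: assumes each modality has already
    # been projected ("docked") to vectors of equal length. For every
    # feature index, one modality is sampled to contribute its value,
    # which regularizes the model against a missing or weak modality.
    rng = random.Random(seed)
    length = len(features[0])
    assert all(len(f) == length for f in features)
    fused = []
    for i in range(length):
        m = rng.randrange(len(features))  # sample the contributing modality
        fused.append(features[m][i])
    return fused

# hypothetical post-docking features for the audio, video, and text branches
audio = [0.1, 0.2, 0.3, 0.4]
video = [1.1, 1.2, 1.3, 1.4]
text  = [2.1, 2.2, 2.3, 2.4]
print(embrace([audio, video, text]))
```

In the full architecture, a classifier on the fused vector would then predict the emotion label; here the sketch only shows the sampling-based combination step.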
- Subjects
EMOTION recognition; EMOTIONS; EMOTIONAL state; CONVERSATION; MULTIMODAL user interfaces
- Publication
Sensors, 2021, Vol. 21, Issue 14, p. 4913
- ISSN
1424-8220
- Publication type
Article
- DOI
10.3390/s21144913