Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party Conversation Contexts
Emotion recognition and sentiment analysis are pivotal tasks in speech and language processing, particularly in real-world scenarios involving multi-party conversational data. This paper presents a multimodal approach to these challenges, evaluated on a well-known multi-party conversation dataset. The proposed system integrates four modalities using pre-trained models: RoBERTa for text, Wav2Vec2 for speech, a proposed FacialNet for facial expressions, and a CNN+Transformer architecture trained from scratch for video analysis. Feature embeddings from each modality are concatenated into a single multimodal vector, which is then used to predict emotion and sentiment labels. The multimodal system outperforms the unimodal approaches, achieving an accuracy of 66.36% for emotion recognition and 72.15% for sentiment analysis.
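The abstract describes a late-fusion design: each modality is encoded separately, the resulting embeddings are concatenated, and the fused vector feeds emotion and sentiment classifiers. The sketch below illustrates that idea under several assumptions not stated in the abstract: the specific pre-trained checkpoints ("roberta-base", "facebook/wav2vec2-base-960h"), mean pooling over tokens/frames, the facial and video embedding dimensions, the number of emotion/sentiment classes, and the use of simple linear heads are all placeholders, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, Wav2Vec2Model


class LateFusionClassifier(nn.Module):
    """Concatenates per-modality embeddings and predicts emotion and sentiment labels.

    The facial and video branches (FacialNet, CNN+Transformer) are assumed to be
    computed upstream and passed in as fixed-size embeddings; their dimensions
    here are illustrative defaults, not values from the paper.
    """

    def __init__(self, facial_dim=512, video_dim=512,
                 num_emotions=7, num_sentiments=3):
        super().__init__()
        # Pre-trained encoders for text and speech, as named in the abstract
        self.text_encoder = RobertaModel.from_pretrained("roberta-base")            # 768-d
        self.speech_encoder = Wav2Vec2Model.from_pretrained(
            "facebook/wav2vec2-base-960h")                                           # 768-d

        fused_dim = 768 + 768 + facial_dim + video_dim
        # Two classification heads over the shared multimodal vector
        self.emotion_head = nn.Linear(fused_dim, num_emotions)
        self.sentiment_head = nn.Linear(fused_dim, num_sentiments)

    def forward(self, input_ids, attention_mask, speech_waveform,
                facial_embedding, video_embedding):
        # Mean-pool token and frame representations into one vector per utterance
        text_emb = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state.mean(dim=1)
        speech_emb = self.speech_encoder(speech_waveform).last_hidden_state.mean(dim=1)

        # Concatenate the four modality embeddings into a single multimodal vector
        fused = torch.cat(
            [text_emb, speech_emb, facial_embedding, video_embedding], dim=-1)
        return self.emotion_head(fused), self.sentiment_head(fused)
```

In this reading, the fusion step itself is parameter-free (pure concatenation), so the modality encoders can be trained or frozen independently of the classification heads; whether the paper fine-tunes the pre-trained encoders end-to-end is not specified in the abstract.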
@article{farhadipour2025_2503.06805,
  title   = {Multimodal Emotion Recognition and Sentiment Analysis in Multi-Party Conversation Contexts},
  author  = {Aref Farhadipour and Hossein Ranjbar and Masoumeh Chapariniya and Teodora Vukovic and Sarah Ebling and Volker Dellwo},
  journal = {arXiv preprint arXiv:2503.06805},
  year    = {2025}
}