
Speech Emotion Recognition using Multi-task learning and a multimodal dynamic fusion network

Interspeech, 2022
Abstract

Emotion Recognition (ER) aims to classify human utterances into different emotion categories. In this paper, we propose MMER, a multimodal multi-task learning approach for ER from individual utterances in isolation, built on early fusion and self-attention-based multimodal interaction between the text and acoustic modalities. MMER uses a multimodal dynamic fusion network that adds minimal parameters over an existing speech encoder to exploit the semantic and syntactic properties hidden in text. Experiments on the IEMOCAP benchmark show that our proposed model achieves state-of-the-art performance. In addition, strong baselines and ablation studies demonstrate the effectiveness of our approach. We make our code publicly available on GitHub.
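To make the fusion idea concrete, below is a minimal PyTorch sketch of self-attention-based cross-modal interaction between speech and text features followed by a fused emotion classifier. This is an illustration only, not the authors' MMER implementation: the module name, feature dimensions, pooling, and the four-class output are all assumptions.

```python
# Minimal sketch (not the authors' code) of cross-modal attention fusion
# between acoustic and text feature sequences, in the spirit of the
# multimodal interaction the abstract describes. Shapes are illustrative.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):  # hypothetical module name
    def __init__(self, dim=256, heads=4, num_classes=4):
        super().__init__()
        # Each modality attends to the other; the results are concatenated.
        self.speech_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.text_to_speech = nn.MultiheadAttention(dim, heads, batch_first=True)
        # num_classes=4 assumes the common 4-class IEMOCAP setup.
        self.classifier = nn.Linear(2 * dim, num_classes)

    def forward(self, speech_feats, text_feats):
        # speech_feats: (batch, T_speech, dim); text_feats: (batch, T_text, dim)
        s_attn, _ = self.speech_to_text(speech_feats, text_feats, text_feats)
        t_attn, _ = self.text_to_speech(text_feats, speech_feats, speech_feats)
        # Mean-pool each attended sequence, then fuse by concatenation.
        fused = torch.cat([s_attn.mean(dim=1), t_attn.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Usage with random tensors standing in for encoder outputs
# (e.g. a pretrained speech encoder and a text encoder, projected to `dim`):
model = CrossModalFusion()
speech = torch.randn(2, 100, 256)  # 2 utterances, 100 acoustic frames
text = torch.randn(2, 20, 256)     # 2 utterances, 20 token embeddings
logits = model(speech, text)       # (2, 4) emotion logits
```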
