Enhancing Multimodal Emotion Recognition through Multi-Granularity Cross-Modal Alignment

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
Main: 4 pages
3 figures
Bibliography: 1 page
4 tables
Abstract

Multimodal emotion recognition (MER), leveraging speech and text, has emerged as a pivotal domain within human-computer interaction, demanding sophisticated methods for effective multimodal integration. Aligning features across these modalities is a significant challenge, and most existing approaches adopt a single alignment strategy. Such a narrow focus not only limits model performance but also fails to address the complexity and ambiguity inherent in emotional expressions. In response, this paper introduces a Multi-Granularity Cross-Modal Alignment (MGCMA) framework, distinguished by its comprehensive design encompassing distribution-based, instance-based, and token-based alignment modules. This framework enables a multi-level perception of emotional information across modalities. Experiments on the IEMOCAP dataset demonstrate that the proposed method outperforms current state-of-the-art techniques.
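The abstract names three alignment granularities but not their exact objectives. The sketch below is one plausible PyTorch realization, assuming moment matching for distribution-based alignment, symmetric InfoNCE for instance-based alignment, and cross-attention reconstruction for token-based alignment; all module names, loss choices, and dimensions here are illustrative assumptions, not the authors' formulation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiGranularityAlignment(nn.Module):
    """Illustrative sketch of multi-granularity cross-modal alignment.

    The three losses below stand in for distribution-based, instance-based,
    and token-based alignment; MGCMA's actual objectives may differ.
    """

    def __init__(self, dim: int = 256, temperature: float = 0.07):
        super().__init__()
        self.temperature = temperature
        self.speech_proj = nn.Linear(dim, dim)  # project acoustic features
        self.text_proj = nn.Linear(dim, dim)    # project textual features

    def distribution_loss(self, s: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Match batch-level feature statistics (mean and variance) of the two
        # modalities -- a simple moment-matching proxy for distribution alignment.
        return F.mse_loss(s.mean(0), t.mean(0)) + F.mse_loss(s.var(0), t.var(0))

    def instance_loss(self, s: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Symmetric InfoNCE over utterance-level embeddings: each speech
        # embedding should match its paired text embedding within the batch.
        s, t = F.normalize(s, dim=-1), F.normalize(t, dim=-1)
        logits = s @ t.T / self.temperature
        labels = torch.arange(s.size(0), device=s.device)
        return (F.cross_entropy(logits, labels)
                + F.cross_entropy(logits.T, labels)) / 2

    def token_loss(self, s_tok: torch.Tensor, t_tok: torch.Tensor) -> torch.Tensor:
        # Fine-grained alignment: each speech frame attends over text tokens,
        # and the attended text summary should reconstruct that frame.
        attn = torch.softmax(s_tok @ t_tok.transpose(1, 2) / self.temperature, dim=-1)
        aligned = attn @ t_tok  # (B, T_s, D): per-frame text summary
        return F.mse_loss(aligned, s_tok)

    def forward(self, speech_tokens: torch.Tensor, text_tokens: torch.Tensor):
        # speech_tokens: (B, T_s, D) frame-level acoustic features
        # text_tokens:   (B, T_t, D) token-level text features
        s_tok = self.speech_proj(speech_tokens)
        t_tok = self.text_proj(text_tokens)
        s_utt, t_utt = s_tok.mean(1), t_tok.mean(1)  # utterance-level pooling
        return (self.distribution_loss(s_utt, t_utt)
                + self.instance_loss(s_utt, t_utt)
                + self.token_loss(s_tok, t_tok))

# Toy usage: a batch of 8 paired utterances with matching feature width.
model = MultiGranularityAlignment(dim=256)
loss = model(torch.randn(8, 50, 256), torch.randn(8, 20, 256))
loss.backward()

In a full MER system, a loss of this kind would typically be combined with a standard emotion-classification objective on the fused representation; the abstract does not specify how the three alignment terms are weighted.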
