Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition

Main: 4 pages · Bibliography: 1 page · 2 figures · 4 tables
Abstract

Speech Emotion Recognition (SER) is crucial for improving human-computer interaction. Despite strides in monolingual SER, extending these advances to a multilingual system remains challenging. Our goal is to train a single model capable of multilingual SER by distilling knowledge from multiple teacher models. To this end, we introduce a novel language-aware multi-teacher knowledge distillation method to advance SER in English, Finnish, and French. It leverages Wav2Vec2.0 as the foundation of monolingual teacher models and then distills their knowledge into a single multilingual student model. The student model demonstrates state-of-the-art performance, with a weighted recall of 72.9 on the English dataset and an unweighted recall of 63.4 on the Finnish dataset, surpassing fine-tuning and knowledge distillation baselines. Our method excels at improving recall for the sad and neutral emotions, although it still faces challenges in recognizing anger and happiness.
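
The abstract does not spell out the distillation objective, but a common form of language-aware multi-teacher distillation is to pair each utterance with the monolingual teacher matching its language, then mix a soft-target KL term with hard-label cross-entropy. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the function name language_aware_kd_loss and the temperature and alpha hyperparameters are illustrative assumptions, not the paper's exact loss.

import torch
import torch.nn.functional as F

def language_aware_kd_loss(student_logits, teacher_logits_by_lang, lang_ids,
                           labels, temperature=2.0, alpha=0.5):
    # student_logits:         (batch, num_emotions)
    # teacher_logits_by_lang: (num_langs, batch, num_emotions), precomputed
    #                         by running every monolingual teacher on the batch
    # lang_ids:               (batch,) index of each utterance's language
    # labels:                 (batch,) gold emotion labels
    # temperature, alpha:     assumed hyperparameters, not from the paper

    # Language-aware step: for each utterance, select the logits of the
    # teacher that matches its language.
    batch_idx = torch.arange(student_logits.size(0))
    teacher_logits = teacher_logits_by_lang[lang_ids, batch_idx]

    # Soft-target KL term, scaled by T^2 as in standard distillation.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Hard-label cross-entropy term on the gold emotion labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce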

@article{bijoy2025_2506.08717,
  title={Multi-Teacher Language-Aware Knowledge Distillation for Multilingual Speech Emotion Recognition},
  author={Mehedi Hasan Bijoy and Dejan Porjazovski and Tamás Grósz and Mikko Kurimo},
  journal={arXiv preprint arXiv:2506.08717},
  year={2025}
}