Semi-Supervised Self-Learning Enhanced Music Emotion Recognition

Yifu Sun
Xulong Zhang
Monan Zhou
Wei Li
Abstract

Music emotion recognition (MER) aims to identify the emotions conveyed in a given musical piece. However, the public datasets currently available for MER have limited sample sizes. Recently, segment-based methods have been proposed for emotion-related tasks: they train backbone networks on short segments rather than entire audio clips, naturally augmenting the training samples without requiring additional resources, and then aggregate the predicted segment-level results into a prediction for the whole song. The most common practice is to let each segment inherit the label of the clip containing it, but music emotion is not constant over an entire clip, so this inheritance introduces label noise and makes training prone to overfitting. To handle the noisy-label issue, we propose a semi-supervised self-learning (SSSL) method that differentiates between samples with correct and incorrect labels in a self-learning manner, thereby effectively utilizing the augmented segment-level data. Experiments on three public emotional datasets demonstrate that the proposed method achieves better or comparable performance.
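
A minimal sketch of the segment-based pipeline the abstract describes: clips are split into fixed-length segments that inherit the clip label, likely mislabeled segments are filtered out, and segment predictions are averaged into a song-level result. The segment length, the small-loss selection criterion, and the averaging aggregation are illustrative assumptions, not necessarily the paper's exact SSSL procedure.

import numpy as np

SEGMENT_LEN = 5 * 22050  # hypothetical: 5-second segments at 22.05 kHz

def make_segments(clip_audio, clip_label):
    """Split one clip into fixed-length segments; each segment
    inherits the clip-level emotion label (the source of label noise)."""
    n = len(clip_audio) // SEGMENT_LEN
    segments = [clip_audio[i * SEGMENT_LEN:(i + 1) * SEGMENT_LEN]
                for i in range(n)]
    return segments, [clip_label] * n

def select_clean(per_segment_losses, keep_ratio=0.7):
    """Small-loss selection (an assumed stand-in for the paper's
    self-learning criterion): treat the lowest-loss segments as
    correctly labeled and mask out the rest for the next round."""
    losses = np.asarray(per_segment_losses)
    k = int(len(losses) * keep_ratio)
    mask = np.zeros(len(losses), dtype=bool)
    mask[np.argsort(losses)[:k]] = True
    return mask

def song_level_prediction(segment_probs):
    """Aggregate segment-level class probabilities into a
    song-level prediction by simple averaging."""
    return np.mean(segment_probs, axis=0).argmax()

In a training loop, select_clean would be applied to the current model's per-segment losses each epoch, so the set of trusted segments is refreshed as the model improves; that iterative re-labeling is the "self-learning" aspect.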

@article{sun2025_2410.21897,
  title={Semi-Supervised Self-Learning Enhanced Music Emotion Recognition},
  author={Yifu Sun and Xulong Zhang and Monan Zhou and Wei Li},
  journal={arXiv preprint arXiv:2410.21897},
  year={2025}
}