
MERGE -- A Bimodal Audio-Lyrics Dataset for Static Music Emotion Recognition

Main: 15 pages, 2 figures, 9 tables; Bibliography: 2 pages; Appendix: 1 page
Abstract

The Music Emotion Recognition (MER) field has seen steady developments in recent years, with contributions from feature engineering, machine learning, and deep learning. The landscape has also shifted from audio-centric systems to bimodal ensembles that combine audio and lyrics. However, a lack of public, sizable, and quality-controlled bimodal databases has hampered the development and improvement of bimodal audio-lyrics systems. This article proposes three new audio, lyrics, and bimodal MER research datasets, collectively referred to as MERGE, which were created using a semi-automatic approach. To comprehensively assess the proposed datasets and establish a baseline for benchmarking, we conducted several experiments for each modality, using feature engineering, machine learning, and deep learning methodologies. Additionally, we propose and validate fixed train-validation-test splits. The obtained results confirm the viability of the proposed datasets, achieving a best overall result of 81.74% F1-score for bimodal classification.
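As a rough illustration of the benchmarking setup described above (training on a fixed split and reporting F1-score), the sketch below shows one way such an evaluation could be run. The file names, column names, and the SVM baseline are assumptions for illustration only, not the authors' released format or method.

```python
# Hypothetical sketch: evaluating a single-modality classifier on a fixed
# train/test split of a MERGE-style dataset and reporting macro F1-score.
import pandas as pd
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Assumed CSV layout: one row per song, feature columns plus a
# 'quadrant' emotion label (these paths/columns are placeholders).
train = pd.read_csv("merge_audio_train.csv")
test = pd.read_csv("merge_audio_test.csv")

X_train, y_train = train.drop(columns=["quadrant"]), train["quadrant"]
X_test, y_test = test.drop(columns=["quadrant"]), test["quadrant"]

# Standardize features and fit an RBF-kernel SVM, a common MER baseline.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Macro F1 averages per-class scores across the emotion quadrants.
print("F1-score:", f1_score(y_test, model.predict(X_test), average="macro"))
```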

@article{louro2025_2407.06060,
  title={MERGE -- A Bimodal Audio-Lyrics Dataset for Static Music Emotion Recognition},
  author={Pedro Lima Louro and Hugo Redinho and Ricardo Santos and Ricardo Malheiro and Renato Panda and Rui Pedro Paiva},
  journal={arXiv preprint arXiv:2407.06060},
  year={2025}
}