
QA-MDT: Quality-aware Masked Diffusion Transformer for Enhanced Music Generation

Abstract

Text-to-music (TTM) generation, which converts textual descriptions into audio, opens up innovative avenues for multimedia creation. Achieving high quality and diversity in this process demands extensive, high-quality data, which are often scarce in available datasets. Most open-source datasets suffer from issues such as low-quality waveforms and poor text-audio consistency, hindering the advancement of music generation models. To address these challenges, we propose a novel quality-aware training paradigm for generating high-quality, high-musicality music from large-scale, quality-imbalanced datasets. Additionally, by leveraging unique properties in the latent space of musical signals, we adapt and implement a masked diffusion transformer (MDT) model for the TTM task, showcasing its capacity for quality control and enhanced musicality. Furthermore, we introduce a three-stage caption refinement approach to address the issue of low-quality captions. Experiments show state-of-the-art (SOTA) performance on benchmark datasets, including MusicCaps and the Song-Describer Dataset, with both objective and subjective metrics. Demo audio samples are available at this https URL; code and pretrained checkpoints are open-sourced at this https URL.
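The quality-aware paradigm described above conditions training on each sample's own quality so that, at inference, generation can be steered toward the high-quality end of an imbalanced corpus. The following is a minimal PyTorch sketch of one way such conditioning could look; the class name QualityAwareConditioner, the pseudo-MOS range [1, 5], and the equal-width binning scheme are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn as nn

class QualityAwareConditioner(nn.Module):
    """Map a continuous quality score (e.g., a pseudo-MOS in [1, 5])
    to a discrete quality embedding that conditions the generator.
    Hypothetical sketch; bin count and score range are assumptions."""

    def __init__(self, num_bins: int = 5, dim: int = 768):
        super().__init__()
        self.num_bins = num_bins
        self.quality_emb = nn.Embedding(num_bins, dim)

    def forward(self, mos: torch.Tensor) -> torch.Tensor:
        # Quantize scores into equal-width bins over [1, 5].
        bins = ((mos - 1.0) / 4.0 * self.num_bins).long()
        bins = bins.clamp(0, self.num_bins - 1)
        return self.quality_emb(bins)  # shape: (batch, dim)

# Training: condition each sample on its own (possibly low) quality bin,
# so the model learns the quality axis rather than averaging over it.
# Inference: always request the highest bin.
cond = QualityAwareConditioner()
train_emb = cond(torch.tensor([2.3, 4.7]))  # per-sample quality
infer_emb = cond(torch.tensor([5.0]))       # ask for top quality

The design point this illustrates: rather than discarding low-quality data, the model is told how good each training sample is, which lets a quality-imbalanced corpus contribute in full while still permitting high-quality sampling at test time.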

@article{li2025_2405.15863,
  title={QA-MDT: Quality-aware Masked Diffusion Transformer for Enhanced Music Generation},
  author={Chang Li and Ruoyu Wang and Lijuan Liu and Jun Du and Yixuan Sun and Zilu Guo and Zhenrong Zhang and Yuan Jiang and Jianqing Gao and Feng Ma},
  journal={arXiv preprint arXiv:2405.15863},
  year={2025}
}