
Analyzable Chain-of-Musical-Thought Prompting for High-Fidelity Music Generation

Abstract

Autoregressive (AR) models have demonstrated impressive capabilities in generating high-fidelity music. However, the conventional next-token prediction paradigm in AR models does not align with the human creative process in music composition, potentially compromising the musicality of generated samples. To overcome this limitation, we introduce MusiCoT, a novel chain-of-thought (CoT) prompting technique tailored for music generation. MusiCoT empowers the AR model to first outline an overall music structure before generating audio tokens, thereby enhancing the coherence and creativity of the resulting compositions. By leveraging the contrastive language-audio pretraining (CLAP) model, we establish a chain of "musical thoughts", making MusiCoT scalable and independent of human-labeled data, in contrast to conventional CoT methods. Moreover, MusiCoT allows for in-depth analysis of music structure, such as instrumental arrangements, and supports music referencing -- accepting variable-length audio inputs as optional style references. This innovative approach effectively addresses copying issues, positioning MusiCoT as a vital practical method for music prompting. Our experimental results indicate that MusiCoT consistently achieves superior performance across both objective and subjective metrics, producing music quality that rivals state-of-the-art generation models. Our samples are available at this https URL.
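The abstract describes a two-stage decoding order: the AR model first emits a chain of "musical thoughts" (structure tokens derived from CLAP-style embeddings) and only then emits audio tokens conditioned on that outline. The following is a minimal toy sketch of that decoding order only; the `toy_model` predictor, token vocabularies, and all function names are illustrative stand-ins, not the paper's actual implementation.

```python
import random

def toy_model(kind, context, rng):
    """Stand-in next-token predictor (seeded random); a real system
    would use a trained AR transformer over discrete codecs here."""
    vocab_size = 64 if kind == "structure" else 256
    return rng.randrange(vocab_size)

def generate_with_musicot(model, n_structure=8, n_audio=32, seed=0):
    """Two-stage generation in the order MusiCoT prescribes:
    structure ("musical thought") tokens first, audio tokens second."""
    rng = random.Random(seed)

    # Stage 1: outline the overall music structure as a chain of
    # discrete "thought" tokens (in the paper, quantized CLAP embeddings).
    structure = []
    for _ in range(n_structure):
        structure.append(model("structure", tuple(structure), rng))

    # Stage 2: generate audio tokens autoregressively, conditioned on
    # the complete structural outline plus previously emitted audio.
    audio = []
    for _ in range(n_audio):
        context = (tuple(structure), tuple(audio))
        audio.append(model("audio", context, rng))

    return structure, audio
```

Because the structure tokens live in an analyzable embedding space (CLAP in the paper), the stage-1 outline can also be inspected directly, e.g. to read off instrumental arrangements, or seeded from a reference recording for style conditioning.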

@article{lam2025_2503.19611,
  title={Analyzable Chain-of-Musical-Thought Prompting for High-Fidelity Music Generation},
  author={Max W. Y. Lam and Yijin Xing and Weiya You and Jingcheng Wu and Zongyu Yin and Fuqiang Jiang and Hangyu Liu and Feng Liu and Xingda Li and Wei-Tsung Lu and Hanyu Chen and Tong Feng and Tianwei Zhao and Chien-Hung Liu and Xuchen Song and Yang Li and Yahui Zhou},
  journal={arXiv preprint arXiv:2503.19611},
  year={2025}
}