MuPT: A Generative Symbolic Music Pretrained Transformer

9 April 2024
Xingwei Qu, Yuelin Bai, Yi Ma, Ziya Zhou, Ka Man Lo, Jiaheng Liu, Ruibin Yuan, Lejun Min, Xueling Liu, Tianyu Zhang, Xinrun Du, Shuyue Guo, Yiming Liang, Yizhi Li, Shangda Wu, Junting Zhou, Tianyu Zheng, Ziyang Ma, Fengze Han, Wei Xue, Gus Xia, Emmanouil Benetos, Xiang Yue, Chenghua Lin, Xu Tan, Stephen W. Huang, Wenhu Chen, Jie Fu, Ge Zhang
Abstract

In this paper, we explore the application of Large Language Models (LLMs) to the pre-training of music. While the prevalent use of MIDI in music modeling is well-established, our findings suggest that LLMs are inherently more compatible with ABC Notation, which aligns more closely with their design and strengths, thereby enhancing the model's performance in musical composition. To address the challenges associated with misaligned measures from different tracks during generation, we propose the development of a Synchronized Multi-Track ABC Notation (SMT-ABC Notation), which aims to preserve coherence across multiple musical tracks. Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set. Furthermore, we explore the implications of the Symbolic Music Scaling Law (SMS Law) on model performance. The results indicate a promising direction for future research in music generation, offering extensive resources for community-led research through our open-source contributions.
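The abstract describes SMT-ABC only at a high level: measures from different tracks are kept aligned so that generation does not drift out of sync across voices. As a rough, hedged illustration of what bar-level synchronization could look like, the Python sketch below interleaves the i-th bar of every track into a single unit. The `interleave_tracks` helper, the `[V:n]` voice tags, and the example bar strings are assumptions for illustration only, not the paper's actual serialization.

```python
# Hypothetical sketch of bar-level synchronization across ABC voices.
# The exact SMT-ABC format is defined in the paper, not in this abstract;
# the function name, voice tags, and layout below are illustrative assumptions.

def interleave_tracks(tracks: list[list[str]]) -> str:
    """Interleave per-track bars so that bar i of every voice appears together.

    `tracks` is a list of voices, each given as a list of ABC bar strings,
    e.g. tracks[0] = ["C2 E2 G2 c2", "B,2 D2 F2 B2", ...].
    """
    n_bars = min(len(t) for t in tracks)  # truncate to the shortest voice
    chunks = []
    for i in range(n_bars):
        # Group the i-th bar of every voice into one synchronized unit.
        bar_group = " ".join(
            f"[V:{v + 1}] {tracks[v][i]} |" for v in range(len(tracks))
        )
        chunks.append(bar_group)
    return "\n".join(chunks)


if __name__ == "__main__":
    melody = ["C2 E2 G2 c2", "B2 G2 E2 C2"]
    bass = ["C,4 G,4", "G,,4 C,4"]
    print(interleave_tracks([melody, bass]))
```

Serializing the corpus this way would keep corresponding measures adjacent in the token stream, which is the coherence property the abstract attributes to SMT-ABC.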
