Mamba-Diffusion Model with Learnable Wavelet for Controllable Symbolic Music Generation

6 May 2025
Jincheng Zhang
György Fazekas
Charalampos Saitis
Abstract

The recent surge in the popularity of diffusion models for image synthesis has attracted new attention to their potential for generation tasks in other domains. However, their applications to symbolic music generation remain largely under-explored because symbolic music is typically represented as sequences of discrete events and standard diffusion models are not well-suited for discrete data. We represent symbolic music as image-like pianorolls, facilitating the use of diffusion models for the generation of symbolic music. Moreover, this study introduces a novel diffusion model that incorporates our proposed Transformer-Mamba block and learnable wavelet transform. Classifier-free guidance is utilised to generate symbolic music with target chords. Our evaluation shows that our method achieves compelling results in terms of music quality and controllability, outperforming the strong baseline in pianoroll generation. Our code is available at this https URL.
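Two ideas in the abstract lend themselves to a short illustration: rendering discrete note events as an image-like pianoroll (so a continuous diffusion model can operate on them), and the classifier-free guidance blend used for conditioning. The sketch below is not the authors' implementation; the note-event format, grid sizes, and function names are illustrative assumptions.

```python
import numpy as np

def to_pianoroll(notes, n_pitches=128, n_steps=16):
    """Render discrete note events as a binary pitch-by-time pianoroll.

    `notes` is a hypothetical list of (midi_pitch, onset_step, duration_steps)
    tuples; the paper's actual data pipeline may differ.
    """
    roll = np.zeros((n_pitches, n_steps), dtype=np.float32)
    for pitch, onset, dur in notes:
        roll[pitch, onset:onset + dur] = 1.0
    return roll

def cfg_blend(eps_cond, eps_uncond, guidance_scale):
    """Standard classifier-free guidance: extrapolate the conditional noise
    prediction away from the unconditional one by `guidance_scale`."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Example: a C-major triad fragment rendered to a 128 x 16 grid.
notes = [(60, 0, 4), (64, 0, 4), (67, 2, 2)]
roll = to_pianoroll(notes)
print(roll.shape)  # (128, 16)

# Example guidance blend on toy noise predictions.
out = cfg_blend(np.array([1.0, 2.0]), np.array([0.5, 1.0]), 2.0)
print(out)  # [1.5 3. ]
```

With guidance scale 1.0 the blend reduces to the conditional prediction alone; larger scales push samples harder toward the target chord condition, typically trading diversity for controllability.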

@article{zhang2025_2505.03314,
  title={Mamba-Diffusion Model with Learnable Wavelet for Controllable Symbolic Music Generation},
  author={Jincheng Zhang and György Fazekas and Charalampos Saitis},
  journal={arXiv preprint arXiv:2505.03314},
  year={2025}
}