LZMidi: Compression-Based Symbolic Music Generation

Recent advances in symbolic music generation primarily rely on deep learning models such as Transformers, GANs, and diffusion models. While these approaches achieve high-quality results, they require substantial computational resources, limiting their scalability. We introduce LZMidi, a lightweight symbolic music generation framework built on a sequential probability assignment (SPA) induced by the Lempel-Ziv (LZ78) universal compression algorithm. By leveraging the discrete, sequential structure of MIDI data, our approach enables efficient music generation on standard CPUs with minimal training and inference cost. Theoretically, we establish universal convergence guarantees for our approach, underscoring its reliability and robustness. Compared to state-of-the-art diffusion models, LZMidi achieves competitive Fréchet Audio Distance (FAD), Wasserstein Distance (WD), and Kullback-Leibler (KL) divergence scores while significantly reducing computational overhead: up to 30x faster training and 300x faster generation. Our results position LZMidi as a significant advance in compression-based learning, highlighting how universal compression techniques can efficiently model and generate structured sequential data, such as symbolic music, with practical scalability and theoretical rigor.
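
To make the idea concrete, below is a minimal Python sketch of an LZ78-induced SPA, not the authors' implementation: a single pass over training tokens builds the LZ78 parse tree while recording symbol counts, and generation samples from smoothed count distributions while walking that tree. The class names, the additive smoothing constant gamma, and the reset-to-root rule at unseen branches are illustrative assumptions; the paper's exact SPA construction may differ.

import random
from collections import defaultdict


class LZ78Node:
    """A node in the LZ78 parse tree: one child per observed extension."""

    def __init__(self):
        self.children = {}               # symbol -> LZ78Node
        self.counts = defaultdict(int)   # symbol -> visits at this context


class LZ78SPA:
    """LZ78-induced sequential probability assignment over a finite alphabet.

    Sketch under stated assumptions; gamma=0.5 gives KT-style smoothing.
    """

    def __init__(self, alphabet, gamma=0.5):
        self.alphabet = list(alphabet)
        self.gamma = gamma
        self.root = LZ78Node()

    def train(self, sequence):
        """Incremental LZ78 parse: record symbol counts along the walk,
        grow a new leaf whenever the phrase leaves the tree, then restart."""
        node = self.root
        for sym in sequence:
            node.counts[sym] += 1
            if sym in node.children:
                node = node.children[sym]
            else:
                node.children[sym] = LZ78Node()
                node = self.root

    def _probs(self, node):
        """Smoothed conditional distribution at the current tree node."""
        total = sum(node.counts.values()) + self.gamma * len(self.alphabet)
        return [(node.counts.get(a, 0) + self.gamma) / total
                for a in self.alphabet]

    def generate(self, length, seed=None):
        """Sample symbols by walking the tree, resetting at unseen branches."""
        rng = random.Random(seed)
        node, out = self.root, []
        for _ in range(length):
            sym = rng.choices(self.alphabet, weights=self._probs(node))[0]
            out.append(sym)
            node = node.children.get(sym, self.root)
        return out


# Toy usage: treat MIDI pitches as integer tokens (hypothetical data).
spa = LZ78SPA(alphabet=range(128))
spa.train([60, 62, 64, 60, 62, 64, 67, 65, 64, 62, 60] * 50)
print(spa.generate(16, seed=0))

Both training and generation here are linear-time walks over plain dictionaries, which is consistent with the abstract's claim that the method runs cheaply on a standard CPU without specialized hardware.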
@article{ding2025_2503.17654,
  title={LZMidi: Compression-Based Symbolic Music Generation},
  author={Connor Ding and Abhiram Gorle and Sagnik Bhattacharya and Divija Hasteer and Naomi Sagan and Tsachy Weissman},
  journal={arXiv preprint arXiv:2503.17654},
  year={2025}
}