Audio-FLAN: A Preliminary Release

Recent advancements in audio tokenization have significantly enhanced the integration of audio capabilities into large language models (LLMs). However, audio understanding and generation are often treated as distinct tasks, hindering the development of truly unified audio-language models. While instruction tuning has demonstrated remarkable success in improving generalization and zero-shot learning across text and vision, its application to audio remains largely unexplored. A major obstacle is the lack of comprehensive datasets that unify audio understanding and generation. To address this, we introduce Audio-FLAN, a large-scale instruction-tuning dataset covering 80 diverse tasks across speech, music, and sound domains, with over 100 million instances. Audio-FLAN lays the foundation for unified audio-language models that can seamlessly handle both understanding (e.g., transcription, comprehension) and generation (e.g., speech, music, sound) tasks across a wide range of audio domains in a zero-shot manner. The Audio-FLAN dataset is available on HuggingFace and GitHub and will be continuously updated.
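Since the dataset is distributed on HuggingFace, one way to start exploring it is with the `datasets` library. The sketch below is illustrative only: the repository ID, split name, and record schema are placeholders, not taken from the release; check the project's HuggingFace page for the actual values. Streaming mode is used because materializing 100M+ instances locally would be impractical.

```python
# Minimal sketch of browsing an instruction-tuning dataset hosted on
# HuggingFace. The repository ID below is hypothetical -- substitute the
# real one from the Audio-FLAN HuggingFace page.
from datasets import load_dataset

# streaming=True avoids downloading the full dataset up front.
ds = load_dataset("Audio-FLAN/Audio-FLAN-Dataset",  # placeholder repo ID
                  split="train",
                  streaming=True)

# Instruction-tuning instances typically pair an instruction with audio
# and a target output; inspect the first record to see the actual schema.
first = next(iter(ds))
print(first.keys())
```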
```bibtex
@article{xue2025_2502.16584,
  title   = {Audio-FLAN: A Preliminary Release},
  author  = {Liumeng Xue and Ziya Zhou and Jiahao Pan and Zixuan Li and Shuai Fan and Yinghao Ma and Sitong Cheng and Dongchao Yang and Haohan Guo and Yujia Xiao and Xinsheng Wang and Zixuan Shen and Chuanbo Zhu and Xinshen Zhang and Tianchi Liu and Ruibin Yuan and Zeyue Tian and Haohe Liu and Emmanouil Benetos and Ge Zhang and Yike Guo and Wei Xue},
  journal = {arXiv preprint arXiv:2502.16584},
  year    = {2025}
}
```