UniSep: Universal Target Audio Separation with Language Models at Scale

31 March 2025
Yuanyuan Wang, Hangting Chen, Dongchao Yang, Weiqin Li, Dan Luo, Guangzhi Li, Shan Yang, Zhiyong Wu, Helen Meng, Xixin Wu
Abstract

We propose Universal target audio Separation (UniSep), which addresses separation of arbitrary mixtures of different audio types. Unlike previous studies, UniSep operates on an unrestricted range of source domains and an unrestricted number of sources. We formulate separation as a sequence-to-sequence problem and use a large language model (LLM) to model the audio sequence in a discrete latent space, leveraging the strength of LLMs in handling complex audio mixtures with large-scale data. Moreover, we propose a novel pre-training strategy that exploits audio-only data, which reduces the effort of large-scale data simulation and enhances the LLM's ability to capture the consistency and correlation of information within audio sequences. We also demonstrate the effectiveness of dataset scaling for audio separation: we use large-scale data (36.5k hours) spanning speech, music, and general sound to train a universal target audio separation model that is not limited to a specific domain. Experiments show that UniSep achieves competitive subjective and objective evaluation results compared with single-task models.
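As a concrete illustration of the sequence-to-sequence formulation, the sketch below casts target separation as next-token prediction over discrete audio tokens. It is a minimal, hypothetical sketch, not the authors' implementation: the codec vocabulary, the [mixture] <sep> [target] token layout, and all model hyperparameters are assumptions made here for illustration.

# Hypothetical sketch of the seq2seq formulation described in the abstract.
# Assumptions (not specified there): a neural codec mapping audio to discrete
# tokens, a decoder-only Transformer, and a [mixture] <sep> [target] layout.
import torch
import torch.nn as nn

VOCAB_SIZE = 1024 + 2          # assumed codec codebook size plus <sep>, <pad>
SEP_ID, PAD_ID = 1024, 1025

class SeparationLM(nn.Module):
    def __init__(self, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model)
        # A TransformerEncoder with a causal mask acts as a decoder-only LM.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens):
        # Causal mask: each position attends only to earlier positions.
        T = tokens.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.decoder(self.embed(tokens), mask=mask)
        return self.head(h)

def training_step(model, mix_tokens, target_tokens):
    # Next-token prediction over [mixture, <sep>, target]; the loss is taken
    # only on the target segment, mirroring a seq2seq objective.
    sep = torch.full((mix_tokens.size(0), 1), SEP_ID)
    seq = torch.cat([mix_tokens, sep, target_tokens], dim=1)
    logits = model(seq[:, :-1])
    labels = seq[:, 1:].clone()
    labels[:, :mix_tokens.size(1)] = PAD_ID   # no loss on the mixture part
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB_SIZE), labels.reshape(-1),
        ignore_index=PAD_ID)

model = SeparationLM()
mix = torch.randint(0, 1024, (2, 50))   # stand-in codec tokens of a mixture
tgt = torch.randint(0, 1024, (2, 50))   # stand-in tokens of the target source
loss = training_step(model, mix, tgt)
loss.backward()

At inference time, one would autoregressively sample tokens after the <sep> position and decode them back to a waveform with the codec's decoder; the abstract does not specify the actual conditioning scheme or token layout, so this only illustrates the general formulation.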

@article{wang2025_2503.23762,
  title={UniSep: Universal Target Audio Separation with Language Models at Scale},
  author={Yuanyuan Wang and Hangting Chen and Dongchao Yang and Weiqin Li and Dan Luo and Guangzhi Li and Shan Yang and Zhiyong Wu and Helen Meng and Xixin Wu},
  journal={arXiv preprint arXiv:2503.23762},
  year={2025}
}