Language-Queried Target Sound Extraction Without Parallel Training Data

Abstract

Language-queried target sound extraction (TSE) aims to extract specific sounds from mixtures based on language queries. Traditional fully-supervised training schemes require extensively annotated parallel audio-text data, which are labor-intensive to collect. We introduce a parallel-data-free training scheme that requires only unlabelled audio clips for TSE model training by utilizing the contrastive language-audio pre-trained model (CLAP). In the vanilla parallel-data-free training scheme, target audio is encoded by the pre-trained CLAP audio encoder to form a condition embedding, while during testing, user language queries are encoded by the CLAP text encoder as the condition embedding. This vanilla approach assumes perfect alignment between text and audio embeddings, which is unrealistic. Two major challenges arise from this training-testing mismatch: the persistent modality gap between text and audio, and the risk of overfitting due to the exposure of rich acoustic detail in the target audio embedding during training. To address these challenges, we propose a retrieval-augmented strategy. Specifically, we create an embedding cache from audio captions generated by a large language model (LLM). During training, target audio embeddings retrieve text embeddings from this cache to use as condition embeddings, ensuring consistent modalities between training and testing and eliminating information leakage. Extensive experimental results show that our retrieval-augmented approach achieves consistent and notable performance improvements over the existing state-of-the-art, with better generalizability.
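To make the retrieval-augmented conditioning concrete, the sketch below illustrates the idea described in the abstract: cache CLAP text embeddings of LLM-generated captions, and at training time swap the target-audio embedding for its nearest cached caption embedding so that both training and testing condition on text-modality embeddings. All function names, the embedding dimension, and the random placeholder data are hypothetical stand-ins, not the authors' implementation; a real system would obtain embeddings from a CLAP encoder pair.

import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    # Normalize rows to unit length so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def build_caption_cache(caption_embeddings: np.ndarray) -> np.ndarray:
    # Cache of CLAP text embeddings for LLM-generated audio captions
    # (hypothetical: shape [num_captions, dim]).
    return l2_normalize(caption_embeddings)

def retrieve_condition(audio_embedding: np.ndarray,
                       cache: np.ndarray, k: int = 1) -> np.ndarray:
    # Training-time conditioning: replace the target-audio embedding with
    # nearby cached caption (text) embeddings, keeping the condition in the
    # text modality and hiding fine acoustic detail from the TSE model.
    query = l2_normalize(audio_embedding[None, :])
    sims = (cache @ query.T)[:, 0]          # cosine similarity to all captions
    top_k = np.argsort(-sims)[:k]           # indices of k closest captions
    return cache[top_k].mean(axis=0)        # aggregate retrieved embeddings

# Toy usage with random placeholder embeddings (dim 512 is an assumption).
rng = np.random.default_rng(0)
cache = build_caption_cache(rng.standard_normal((1000, 512)))
audio_emb = rng.standard_normal(512)        # stands in for a CLAP audio embedding
cond = retrieve_condition(audio_emb, cache, k=3)
print(cond.shape)                           # (512,)

At test time no retrieval is needed: the condition embedding comes directly from the CLAP text encoder applied to the user's language query, so training and testing see the same modality.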

@article{ma2025_2409.09398,
  title={Language-Queried Target Sound Extraction Without Parallel Training Data},
  author={Hao Ma and Zhiyuan Peng and Xu Li and Yukai Li and Mingjie Shao and Qiuqiang Kong and Ju Liu},
  journal={arXiv preprint arXiv:2409.09398},
  year={2025}
}