
Neural Audio Codecs for Prompt-Driven Universal Sound Separation

Main: 10 pages, 4 figures, 11 tables; bibliography: 4 pages; appendix: 15 pages
Abstract

Text-guided sound separation supports flexible audio editing across media and assistive applications, but existing models like AudioSep are too compute-heavy for edge deployment. Neural audio codec (NAC) models such as CodecFormer and SDCodec are compute-efficient but limited to fixed-class separation. We introduce CodecSep, the first NAC-based model for on-device universal, text-driven separation. CodecSep combines DAC compression with a Transformer masker modulated by CLAP-derived FiLM parameters. Across six open-domain benchmarks under matched training/prompt protocols, CodecSep surpasses AudioSep in separation fidelity (SI-SDR) while remaining competitive in perceptual quality (ViSQOL) and matching or exceeding fixed-stem baselines (TDANet, CodecFormer, SDCodec). In code-stream deployments, it needs just 1.35 GMACs end-to-end -- approximately 54× less compute (25× architecture-only) than spectrogram-domain separators like AudioSep -- while remaining fully bitstream-compatible.
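The FiLM conditioning mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, weight shapes, and the single-layer projection from the text embedding are assumptions for illustration. The idea is that a CLAP text embedding is projected to per-channel scale (gamma) and shift (beta) vectors, which then modulate the Transformer masker's hidden features.

```python
import numpy as np

def film_modulate(features, text_emb, W_gamma, b_gamma, W_beta, b_beta):
    """FiLM-style conditioning (hypothetical sketch, not the paper's code).

    features: (T, d_model) hidden features from the Transformer masker
    text_emb: (d_text,) CLAP text embedding for the prompt
    W_*: (d_text, d_model) projection weights; b_*: (d_model,) biases
    """
    gamma = text_emb @ W_gamma + b_gamma  # per-channel scale, (d_model,)
    beta = text_emb @ W_beta + b_beta     # per-channel shift, (d_model,)
    # Broadcast over the time axis: each channel is scaled and shifted
    # according to the text prompt.
    return gamma * features + beta
```

With gamma fixed at 1 and beta at 0 the layer is an identity, so the prompt embedding only perturbs the features to the extent the learned projections demand.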
