
SAM Audio: Segment Anything in Audio

Bowen Shi
Andros Tjandra
John Hoffman
Helin Wang
Yi-Chiao Wu
Luya Gao
Julius Richter
Matt Le
Apoorv Vyas
Sanyuan Chen
Christoph Feichtenhofer
Piotr Dollár
Wei-Ning Hsu
Ann Lee
Abstract

General audio source separation is a key capability for multimodal AI systems that can perceive and reason about sound. Despite substantial progress in recent years, existing separation models are either domain-specific, designed for fixed categories such as speech or music, or limited in controllability, supporting only a single prompting modality such as text. In this work, we present SAM Audio, a foundation model for general audio separation that unifies text, visual, and temporal span prompting within a single framework. Built on a diffusion transformer architecture, SAM Audio is trained with flow matching on large-scale audio data spanning speech, music, and general sounds, and can flexibly separate target sources described by language, visual masks, or temporal spans. The model achieves state-of-the-art performance across a diverse suite of benchmarks, including general sound, speech, music, and musical instrument separation in both in-the-wild and professionally produced audio, substantially outperforming prior general-purpose and specialized systems. Furthermore, we introduce a new real-world separation benchmark with human-labeled multimodal prompts and a reference-free evaluation model that correlates strongly with human judgment.
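
For readers unfamiliar with flow matching, the training objective mentioned in the abstract typically takes the following standard conditional form; this is a generic sketch rather than the paper's exact formulation, and the symbols v_θ (velocity network, here the diffusion transformer), x_0 (noise sample), x_1 (target source latent), and c (multimodal prompt embedding) are our notation:

\[
x_t = (1-t)\,x_0 + t\,x_1, \qquad
\mathcal{L}_{\mathrm{FM}} = \mathbb{E}_{t \sim \mathcal{U}(0,1),\; x_0 \sim \mathcal{N}(0,I),\; x_1 \sim p_{\mathrm{data}}} \big\| v_\theta(x_t, t, c) - (x_1 - x_0) \big\|^2 .
\]

Under this reading, separation at inference time amounts to integrating the learned ODE dx_t/dt = v_θ(x_t, t, c) from noise toward the target source, with c carrying the text, visual-mask, or temporal-span prompt that specifies which source to extract.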

Main text: 31 pages · 7 figures · 22 tables · Bibliography: 9 pages · Appendix: 17 pages