
Audio-Visual Instance Segmentation

Ruohao Guo
Xianghua Ying
Yaru Chen
Dantong Niu
Guangyao Li
Liao Qu
Yanyu Qi
Jinxing Zhou
Bowei Xing
Wenzhen Yue
Ji Shi
Qixun Wang
Peiliang Zhang
Buwen Liang
Abstract

In this paper, we propose a new multi-modal task, termed audio-visual instance segmentation (AVIS), which aims to simultaneously identify, segment, and track individual sounding object instances in audible videos. To facilitate this research, we introduce a high-quality benchmark named AVISeg, containing over 90K instance masks from 26 semantic categories in 926 long videos. Additionally, we propose a strong baseline model for this task. Our model first localizes sound sources within each frame and condenses object-specific contexts into concise tokens. It then builds long-range audio-visual dependencies between these tokens using window-based attention and tracks sounding objects across entire video sequences. Extensive experiments show that our method performs best on AVISeg, surpassing existing methods from related tasks. We further evaluate several multi-modal large models; unfortunately, they exhibit subpar performance on instance-level sound source localization and temporal perception. We expect that AVIS will inspire the community towards a more comprehensive multi-modal understanding. The dataset and code are available at this https URL.
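To make the window-based attention step more concrete, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation. The class and argument names (WindowedAudioVisualAttention, window, obj_tokens, audio_tokens) are invented for illustration, and all shapes are assumptions. It shows the general idea the abstract describes: per-frame object tokens attend to one another and to per-frame audio features, with attention restricted to a fixed temporal window so dependencies can be built over long videos at bounded cost.

```python
import torch
import torch.nn as nn

class WindowedAudioVisualAttention(nn.Module):
    """Illustrative sketch of window-based audio-visual attention.

    Object tokens within a temporal window attend to each other and to
    that window's audio tokens. Names/shapes are hypothetical, not the
    paper's actual design.
    """
    def __init__(self, dim=256, num_heads=8, window=8):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, obj_tokens, audio_tokens):
        # obj_tokens:   (T, N, D) -- N condensed object tokens per frame
        # audio_tokens: (T, D)    -- one audio feature per frame
        T, N, D = obj_tokens.shape
        out = torch.empty_like(obj_tokens)
        for start in range(0, T, self.window):
            end = min(start + self.window, T)
            # Queries: all object tokens in the window, flattened.
            q = obj_tokens[start:end].reshape(1, -1, D)
            # Keys/values: the same object tokens plus the window's audio.
            kv = torch.cat([q, audio_tokens[start:end].unsqueeze(0)], dim=1)
            fused, _ = self.attn(q, kv, kv)
            out[start:end] = fused.reshape(end - start, N, D)
        return out

# Example: 32 frames, 10 object tokens per frame, 256-d features.
attn = WindowedAudioVisualAttention(dim=256, num_heads=8, window=8)
fused = attn(torch.randn(32, 10, 256), torch.randn(32, 256))  # (32, 10, 256)
```

Restricting attention to windows keeps the token count per attention call small while still letting information propagate across the video through overlapping or successive windows; how the paper links windows for tracking is not specified here.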

@article{guo2025_2310.18709,
  title={Audio-Visual Instance Segmentation},
  author={Ruohao Guo and Xianghua Ying and Yaru Chen and Dantong Niu and Guangyao Li and Liao Qu and Yanyu Qi and Jinxing Zhou and Bowei Xing and Wenzhen Yue and Ji Shi and Qixun Wang and Peiliang Zhang and Buwen Liang},
  journal={arXiv preprint arXiv:2310.18709},
  year={2025}
}