Baseline Systems and Evaluation Metrics for Spatial Semantic Segmentation of Sound Scenes

Immersive communication has advanced significantly, most recently with the release of the Immersive Voice and Audio Services (IVAS) codec. To push this further, the DCASE 2025 Challenge recently introduced a task on spatial semantic segmentation of sound scenes (S5), which focuses on detecting and separating sound events in spatial sound scenes. In this paper, we explore methods for addressing the S5 task. Specifically, we present baseline S5 systems that combine audio tagging (AT) and label-queried source separation (LSS) models. We investigate two LSS approaches based on the ResUNet architecture: (a) extracting a single source for each detected event and (b) querying multiple sources concurrently. Since each separated source in S5 is identified by its sound event class label, we propose new class-aware metrics that evaluate the separated sources and their labels jointly. Experimental results on first-order ambisonics spatial audio demonstrate the effectiveness of the proposed systems and confirm the efficacy of the metrics.
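To make the two-stage baseline concrete, the following is a minimal sketch of an AT-then-LSS pipeline in the single-source-per-query setting. It assumes hypothetical model interfaces (`at_model.predict` returning per-class probabilities, `lss_model.separate` taking a one-hot label query) and an assumed detection threshold; it is an illustration of the structure described in the abstract, not the released baseline code.

```python
import numpy as np

def one_hot(index, num_classes):
    """Build a one-hot label query vector."""
    vec = np.zeros(num_classes, dtype=np.float32)
    vec[index] = 1.0
    return vec

def s5_pipeline(mixture, at_model, lss_model, tag_threshold=0.5):
    """Two-stage S5 baseline sketch: audio tagging followed by
    label-queried source separation (one query per detected event).
    Model interfaces and the threshold value are assumptions."""
    # 1) Audio tagging: predict a probability for each sound event class.
    class_probs = at_model.predict(mixture)          # shape: (num_classes,)
    detected = [c for c, p in enumerate(class_probs) if p >= tag_threshold]

    # 2) Label-queried separation: extract one source per detected label.
    outputs = {}
    for class_idx in detected:
        query = one_hot(class_idx, num_classes=len(class_probs))
        outputs[class_idx] = lss_model.separate(mixture, query)
    return outputs  # dict: class label -> estimated source waveform
```

The second LSS variant mentioned in the abstract would instead pass all detected labels to the separator in a single call and receive the corresponding sources concurrently.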
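The class-aware evaluation idea can likewise be sketched: a separated source is scored only when its predicted class label matches a reference label, while missed and spurious events receive a fixed penalty. This is an illustrative simplification under assumed conventions (dictionaries keyed by class label, plain SDR as the per-source score, a 0 dB penalty); the exact metric definitions are those given in the paper.

```python
import numpy as np

def sdr(reference, estimate, eps=1e-8):
    """Signal-to-distortion ratio in dB between a reference and an estimate."""
    error = reference - estimate
    return 10.0 * np.log10(
        (np.sum(reference ** 2) + eps) / (np.sum(error ** 2) + eps)
    )

def class_aware_score(est_sources, ref_sources, penalty=0.0):
    """Sketch of a class-aware separation score.

    est_sources / ref_sources: dict mapping class label -> waveform.
    A source contributes its SDR only if the label matches a reference;
    missed references and false-alarm estimates get `penalty` (assumed value)."""
    scores = []
    for label, ref in ref_sources.items():
        if label in est_sources:
            scores.append(sdr(ref, est_sources[label]))  # correct label: score quality
        else:
            scores.append(penalty)                       # missed event
    for label in est_sources:
        if label not in ref_sources:
            scores.append(penalty)                       # spurious (false-alarm) event
    return sum(scores) / len(scores) if scores else penalty
```

Coupling the label check to the separation score in this way is what lets a single number reflect both detection errors and separation quality, which is the purpose of the proposed class-aware metrics.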
@article{nguyen2025_2503.22088,
  title   = {Baseline Systems and Evaluation Metrics for Spatial Semantic Segmentation of Sound Scenes},
  author  = {Binh Thien Nguyen and Masahiro Yasuda and Daiki Takeuchi and Daisuke Niizumi and Yasunori Ohishi and Noboru Harada},
  journal = {arXiv preprint arXiv:2503.22088},
  year    = {2025}
}