A Two-student Learning Framework for Mixed Supervised Target Sound Detection

Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), 2022
Dongchao Yang
Helin Wang
Abstract

Target sound detection (TSD) aims to detect a target sound in mixture audio given reference information. Previous work shows that good detection performance relies on fully-annotated data; however, collecting fully-annotated data is labor-intensive. We therefore consider TSD with mixed supervision, which learns novel categories (the target domain) from weak annotations with the help of full annotations for existing base categories (the source domain). We propose a novel two-student learning framework containing two mutually helping student models (s_student and w_student) that learn from the fully- and weakly-annotated datasets, respectively. Specifically, we first propose a frame-level knowledge distillation strategy to transfer class-agnostic knowledge from s_student to w_student. We then design a pseudo-supervised (PS) training scheme to transfer knowledge from w_student back to s_student. Lastly, we propose an adversarial training strategy that aligns the data distributions of the source and target domains. To evaluate our method, we build three TSD datasets based on UrbanSound and AudioSet. Experimental results show that our methods offer about an 8% improvement in event-based F-score.
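The frame-level knowledge distillation step described above can be illustrated with a minimal sketch. The abstract does not specify the exact loss, so the function below is an assumed formulation: a binary cross-entropy between the w_student's frame-wise sound-presence probabilities and soft targets produced by the s_student, which is class-agnostic because it scores presence per frame rather than per category. The function name and shapes are hypothetical, not from the paper.

```python
import numpy as np

def frame_level_distillation_loss(s_probs, w_probs):
    """Hypothetical frame-level distillation loss.

    s_probs: (batch, frames) soft targets from s_student (teacher role here),
             frame-wise probabilities that a target sound is present.
    w_probs: (batch, frames) predictions from w_student.
    Returns a scalar binary cross-entropy averaged over all frames.
    """
    eps = 1e-7  # avoid log(0)
    w = np.clip(w_probs, eps, 1.0 - eps)
    bce = -(s_probs * np.log(w) + (1.0 - s_probs) * np.log(1.0 - w))
    return float(np.mean(bce))

# The loss shrinks as w_student's frame activity matches s_student's targets.
targets = np.array([[0.9, 0.1, 0.8]])
close = frame_level_distillation_loss(targets, np.array([[0.85, 0.15, 0.75]]))
far = frame_level_distillation_loss(targets, np.array([[0.1, 0.9, 0.2]]))
```

In the PS training step, the roles would reverse: w_student's frame predictions on weakly-labeled data serve as pseudo targets for s_student, using a loss of the same form.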
