UWAV: Uncertainty-weighted Weakly-supervised Audio-Visual Video Parsing

Audio-Visual Video Parsing (AVVP) entails the challenging task of localizing both uni-modal events (i.e., those occurring exclusively in either the visual or acoustic modality of a video) and multi-modal events (i.e., those occurring in both modalities concurrently). Moreover, the prohibitive cost of annotating training data with the class labels of all these events, along with their start and end times, constrains the scalability of AVVP techniques unless they can be trained in a weakly-supervised setting, where only modality-agnostic, video-level labels are available in the training data. In this setting, recently proposed approaches seek to generate segment-level pseudo-labels to better guide model training. However, the absence of inter-segment dependencies when generating these pseudo-labels, together with a general bias towards predicting labels as absent in a segment, limits their performance. This work proposes a novel approach to overcoming these weaknesses, called Uncertainty-weighted Weakly-supervised Audio-Visual Video Parsing (UWAV). UWAV factors in the uncertainty associated with the estimated pseudo-labels and incorporates a feature-mixup-based training regularization for improved training. Empirical results show that UWAV outperforms state-of-the-art methods for the AVVP task on multiple metrics, across two different datasets, attesting to its effectiveness and generalizability.
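The abstract names two training-time ingredients: weighting each segment-level pseudo-label by an associated uncertainty estimate, and a feature-mixup regularization. The sketch below illustrates one plausible reading of these ideas in PyTorch; the confidence proxy `(2p - 1).abs()`, the helper names, and the placement of mixup in feature space are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def uncertainty_weighted_bce(logits, pseudo_labels, confidence):
    """Per-segment BCE, re-weighted so that low-confidence (high-uncertainty)
    pseudo-labels contribute less to the training signal."""
    loss = F.binary_cross_entropy_with_logits(
        logits, pseudo_labels, reduction="none")
    return (confidence * loss).sum() / confidence.sum().clamp(min=1e-6)


def feature_mixup(tensors, alpha=0.5):
    """Mixup in feature space: draw one Beta coefficient and apply the same
    convex combination (against a shuffled batch) to every tensor, keeping
    features, pseudo-labels, and confidences aligned."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(tensors[0].size(0))
    return [lam * t + (1.0 - lam) * t[perm] for t in tensors]


# Illustrative training step: B videos, T segments, C classes, D feature dims.
B, T, C, D = 4, 10, 25, 512
feats = torch.randn(B, T, D)        # per-segment (audio or visual) features
pseudo = torch.rand(B, T, C)        # hypothetical segment-level pseudo-labels
conf = (2.0 * pseudo - 1.0).abs()   # assumed confidence proxy: near 0.5 => uncertain

head = torch.nn.Linear(D, C)        # stand-in for the parsing head
mfeats, mpseudo, mconf = feature_mixup([feats, pseudo, conf])
loss = uncertainty_weighted_bce(head(mfeats), mpseudo, mconf)
loss.backward()
```

Down-weighting uncertain pseudo-labels keeps noisy segment-level supervision from dominating training, while mixing features and targets with the same coefficient encourages smoother decision boundaries; both choices here are a minimal sketch, not UWAV's exact recipe.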
@article{lai2025_2505.09615,
  title={UWAV: Uncertainty-weighted Weakly-supervised Audio-Visual Video Parsing},
  author={Yung-Hsuan Lai and Janek Ebbers and Yu-Chiang Frank Wang and François Germain and Michael Jeffrey Jones and Moitreya Chatterjee},
  journal={arXiv preprint arXiv:2505.09615},
  year={2025}
}