
Voice Activity Projection Model with Multimodal Encoders

Main: 3 pages, 2 figures, 1 table; bibliography: 2 pages
Abstract

Turn-taking management is crucial for any social interaction, yet it is challenging to model in human-machine interaction due to the complexity of the social context and its multimodal nature. Unlike conventional systems based on silence duration, existing voice activity projection (VAP) models successfully use a unified representation of turn-taking behaviors as prediction targets, which improves turn-taking prediction performance. Recently, a multimodal VAP model outperformed the previous state-of-the-art model by a significant margin. In this paper, we propose a multimodal model enhanced with pre-trained audio and face encoders to improve performance by capturing subtle facial expressions. Our model performs competitively with, and in some cases better than, state-of-the-art models on turn-taking metrics. All source code and pretrained models are available at this https URL.
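To make the described architecture concrete, below is a minimal PyTorch sketch of how pretrained audio and face encoders could feed a VAP-style predictor. Everything here is illustrative, not the authors' exact implementation: the names (MultimodalVAP, d_model, face_dim) are our own, the small conv and linear stacks merely stand in for the pretrained encoders, concatenation plus projection is only one possible fusion choice, and the 256-way output assumes the standard VAP state space from the VAP literature (two speakers, four future time bins, hence 2^8 = 256 discrete projection states).

import torch
import torch.nn as nn

class MultimodalVAP(nn.Module):
    """Illustrative VAP-style model fusing audio and face streams."""

    def __init__(self, d_model=256, n_states=256, face_dim=128):
        super().__init__()
        # Stand-in for a pretrained audio encoder (a CPC- or
        # wav2vec-style model in practice): a small conv stack that
        # downsamples raw waveform into frame-level features.
        self.audio_encoder = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=10, stride=5),
            nn.GELU(),
            nn.Conv1d(d_model, d_model, kernel_size=8, stride=4),
            nn.GELU(),
        )
        # Stand-in for a pretrained face encoder; assumes per-frame
        # face features (e.g., embeddings or action units) as input.
        self.face_encoder = nn.Sequential(
            nn.Linear(face_dim, d_model),
            nn.GELU(),
        )
        # Simple fusion: concatenate per-frame features and project.
        self.fusion = nn.Linear(2 * d_model, d_model)
        # Sequence model over the fused frames (a causal mask would
        # be added in practice for online prediction).
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        # Per-frame logits over the discrete VAP projection states.
        self.head = nn.Linear(d_model, n_states)

    def forward(self, waveform, face_feats):
        # waveform: (batch, samples); face_feats: (batch, frames, face_dim)
        a = self.audio_encoder(waveform.unsqueeze(1)).transpose(1, 2)
        f = self.face_encoder(face_feats)
        # Align the two streams by truncating to the shorter sequence.
        t = min(a.size(1), f.size(1))
        fused = self.fusion(torch.cat([a[:, :t], f[:, :t]], dim=-1))
        return self.head(self.temporal(fused))  # (batch, t, n_states)

model = MultimodalVAP()
wav = torch.randn(2, 16000)        # one second of 16 kHz audio
face = torch.randn(2, 800, 128)    # per-frame face features
print(model(wav, face).shape)      # torch.Size([2, 798, 256])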

@article{saga2025_2506.03980,
  title={Voice Activity Projection Model with Multimodal Encoders},
  author={Takeshi Saga and Catherine Pelachaud},
  journal={arXiv preprint arXiv:2506.03980},
  year={2025}
}