Audio-Visual Speaker Tracking: Progress, Challenges, and Future Directions

Audio-visual speaker tracking has drawn increasing attention over the past few years due to its academic value and wide range of applications. The audio and visual modalities provide complementary information for localization and tracking. Given audio and visual measurements, Bayesian filters and deep-learning-based methods can address data association, audio-visual fusion, and track management. In this paper, we present a comprehensive overview of audio-visual speaker tracking. To our knowledge, this is the first extensive survey of the area in the past five years. We introduce the family of Bayesian filters and summarize methods for obtaining audio-visual measurements. In addition, we review existing trackers and their performance on the AV16.3 dataset. Deep learning techniques have thrived in recent years, which has also boosted the development of audio-visual speaker tracking; we discuss their influence on both measurement extraction and state estimation. Finally, we discuss the connections between audio-visual speaker tracking and related areas such as speech separation and distributed speaker tracking.
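To make the Bayesian-filtering view of audio-visual fusion concrete, the following is a minimal, hypothetical sketch (not taken from the survey) of a single predict-update step of a particle filter that fuses an audio direction-of-arrival (DOA) measurement with a visual detection. The Gaussian likelihoods, the random-walk motion model, the noise parameters, and the assumption of a microphone array at the origin are all illustrative choices.

```python
import numpy as np

def particle_filter_step(particles, weights, audio_doa, visual_pos,
                         motion_std=0.05, audio_std=0.2, visual_std=0.1,
                         rng=None):
    """One predict/update step fusing an audio DOA with a visual detection.

    particles  : (N, 2) candidate speaker positions (x, y) in the room plane.
    weights    : (N,) particle weights summing to 1.
    audio_doa  : bearing in radians from a DOA estimator (array at origin).
    visual_pos : (2,) position from a face/body detector, or None if the
                 speaker is occluded (audio-only update).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Predict: propagate particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Audio update: compare each particle's bearing with the observed DOA,
    # wrapping the angular difference into [-pi, pi].
    bearings = np.arctan2(particles[:, 1], particles[:, 0])
    diff = np.angle(np.exp(1j * (bearings - audio_doa)))
    weights = weights * np.exp(-0.5 * (diff / audio_std) ** 2)
    # Visual update: apply only when a detection is available, which is how
    # the fusion degrades gracefully under occlusion.
    if visual_pos is not None:
        d2 = np.sum((particles - visual_pos) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / visual_std ** 2)
    weights = weights / np.sum(weights)
    # State estimate: posterior mean over particles.
    estimate = weights @ particles
    return particles, weights, estimate
```

In practice the measurement likelihoods would come from learned detectors and DOA estimators, and a resampling step would follow, but the sketch shows the core idea: each modality reweights the same set of hypotheses about the speaker's position.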
@article{zhao2025_2310.14778,
  title={Audio-Visual Speaker Tracking: Progress, Challenges, and Future Directions},
  author={Jinzheng Zhao and Yong Xu and Xinyuan Qian and Davide Berghi and Peipei Wu and Meng Cui and Jianyuan Sun and Philip J.B. Jackson and Wenwu Wang},
  journal={arXiv preprint arXiv:2310.14778},
  year={2025}
}