Visual Cues Support Robust Turn-taking Prediction in Noise

28 May 2025
Sam O'Connor Russell
Naomi Harte
Main: 4 pages · 3 figures · 2 tables · Bibliography: 1 page
Abstract

Accurate predictive turn-taking models (PTTMs) are essential for naturalistic human-robot interaction. However, little is known about their performance in noise. This study therefore explores PTTM performance in types of noise likely to be encountered once deployed. Our analyses reveal that PTTMs are highly sensitive to noise. Hold/shift accuracy drops from 84% in clean speech to just 52% in 10 dB music noise. Training with noisy data enables a multimodal PTTM, which includes visual features, to better exploit visual cues, reaching 72% accuracy in 10 dB music noise. The multimodal PTTM outperforms the audio-only PTTM across all noise types and SNRs, highlighting its ability to exploit visual cues; however, this does not always generalise to new types of noise. Analysis also reveals that successful training relies on accurate transcription, limiting the use of ASR-derived transcriptions to clean conditions. We make our code publicly available for future research.
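
The abstract reports hold/shift accuracy at specific signal-to-noise ratios (e.g. 10 dB music noise) and notes that training on noisy data recovers much of the lost performance. As a rough illustration of how such noisy training material is commonly constructed, the sketch below mixes a clean speech signal with a noise recording at a target SNR. This is a generic sketch, not the authors' released code; the function name and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a clean speech signal with noise at a target SNR (in dB).

    Both inputs are 1-D float arrays at the same sample rate; the noise is
    tiled or truncated to match the length of the speech.
    """
    # Match the noise length to the speech length.
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[: len(speech)]

    # Signal powers (mean squared amplitude).
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12  # guard against silent noise clips

    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / noise_power)

    return speech + noise

# Hypothetical usage: corrupt a clean utterance with a music clip at 10 dB SNR,
# the condition in which the paper reports the largest accuracy drop.
# noisy = mix_at_snr(clean_utterance, music_clip, snr_db=10.0)
```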

@article{russell2025_2505.22088,
  title={Visual Cues Support Robust Turn-taking Prediction in Noise},
  author={Sam O'Connor Russell and Naomi Harte},
  journal={arXiv preprint arXiv:2505.22088},
  year={2025}
}