Back to Basics: Revisiting ASR in the Age of Voice Agents

Geeyang Tay
Wentao Ma
Jaewon Lee
Yuzhi Tang
Daniel Lee
Weisu Yin
Dongming Shen
Silin Meng
Yi Zhu
Mu Li
Alex Smola
Main: 10 pages
5 figures
Bibliography: 5 pages
16 tables
Appendix: 5 pages
Abstract

Automatic speech recognition (ASR) systems have achieved near-human accuracy on curated benchmarks, yet they still fail in real-world voice agents under conditions that current evaluations do not systematically cover. Without diagnostic tools that isolate specific failure factors, practitioners cannot anticipate which conditions, in which languages, will cause what degree of degradation. We introduce WildASR, a four-language diagnostic benchmark sourced entirely from real human speech that factorizes ASR robustness along three axes: environmental degradation, demographic shift, and linguistic diversity. Evaluating seven widely used ASR systems, we find severe, uneven performance degradation and show that robustness does not transfer across languages or conditions. Critically, models often hallucinate plausible but unspoken content under partial or degraded inputs, creating concrete safety risks for downstream agent behavior. Our results demonstrate that targeted, factor-isolated evaluation is essential for understanding and improving ASR reliability in production systems. Beyond the benchmark itself, we present three analytical tools that practitioners can use to guide deployment decisions.
