Reducing Bias in Production Speech Models
Eric Battenberg
Rewon Child
Adam Coates
Christopher Fougner
Yashesh Gaur
Jiaji Huang
Heewoo Jun
Ajay Kannan
Markus Kliegl
Atul Kumar
Hairong Liu
Vinay Rao
Sanjeev Satheesh
David Seetapun
Anuroop Sriram
Zhenyao Zhu

Abstract
Replacing hand-engineered pipelines with end-to-end deep learning systems has enabled strong results in applications like speech and object recognition. However, the causality and latency constraints of production systems put end-to-end speech models back into the underfitting regime and expose biases in the model that we show cannot be overcome by "scaling up", i.e., training bigger models on more data. In this work we systematically identify and address sources of bias, reducing error rates by up to 20% while remaining practical for deployment. We achieve this by utilizing improved neural architectures for streaming inference, solving optimization issues, and employing strategies that increase audio and label modelling versatility.
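The abstract refers to neural architectures suited to streaming inference, where each output frame may depend on only a small, bounded amount of future audio so that latency stays fixed. The sketch below is a generic illustration of one such building block, a bounded-lookahead convolution over time; it is not taken from the paper's implementation. It assumes PyTorch, and the layer sizes and the two-frame lookahead are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a limited-lookahead convolution:
# each output frame sees only the current frame plus a fixed number of
# future frames, which bounds the latency added at inference time.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LookaheadConv(nn.Module):
    """Depthwise 1-D convolution over time with a fixed future context."""

    def __init__(self, features: int, lookahead: int = 2):
        super().__init__()
        self.lookahead = lookahead
        # Kernel spans the current frame plus `lookahead` future frames.
        self.conv = nn.Conv1d(
            features, features, kernel_size=lookahead + 1,
            groups=features, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        x = x.transpose(1, 2)              # (batch, features, time)
        x = F.pad(x, (0, self.lookahead))  # pad only the future side
        x = self.conv(x)                   # output length equals input length
        return x.transpose(1, 2)


if __name__ == "__main__":
    batch, time, features = 4, 100, 256   # illustrative sizes
    frames = torch.randn(batch, time, features)
    out = LookaheadConv(features, lookahead=2)(frames)
    print(out.shape)  # torch.Size([4, 100, 256])
```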