Elevating Robust Multi-Talker ASR by Decoupling Speaker Separation and Speech Recognition

Abstract

Despite the tremendous success of automatic speech recognition (ASR) brought by deep learning, its performance remains unsatisfactory in many real-world multi-talker scenarios. Speaker separation excels at separating individual talkers, but as a frontend it introduces processing artifacts that degrade an ASR backend trained on clean speech. As a result, mainstream robust ASR systems train the backend on noisy speech to avoid these artifacts. In this work, we propose to decouple the training of the speaker separation frontend and the ASR backend, with the latter trained on clean speech only. Our decoupled system achieves a word error rate (WER) of 5.1% on the Libri2Mix dev/test sets, significantly outperforming other multi-talker ASR baselines. Its effectiveness is also demonstrated by state-of-the-art WERs of 7.60% and 5.74% on 1-channel and 6-channel SMS-WSJ, respectively. Furthermore, on recorded LibriCSS, we achieve a speaker-attributed WER of 2.92%. These state-of-the-art results suggest that decoupling speaker separation and recognition is an effective approach to elevating robust multi-talker ASR.
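To make the decoupled pipeline concrete, the sketch below shows the two-stage inference flow the abstract describes: a speaker-separation frontend produces one stream per talker, and an ASR backend trained only on clean speech transcribes each stream. This is a minimal illustration, not the authors' implementation; `SeparationFrontend` and `CleanTrainedASR` are hypothetical placeholders for trained models.

```python
# Minimal sketch of a decoupled multi-talker ASR pipeline (illustrative only).
# SeparationFrontend and CleanTrainedASR are hypothetical stand-ins for a
# pretrained speaker-separation network and an ASR model trained on clean,
# single-talker speech.
import torch


class SeparationFrontend(torch.nn.Module):
    """Hypothetical frontend: maps a single-channel mixture to per-speaker waveforms."""

    def __init__(self, num_speakers: int = 2):
        super().__init__()
        self.num_speakers = num_speakers
        # A real system would load a trained separation network here;
        # this placeholder simply copies the mixture for each speaker.

    def forward(self, mixture: torch.Tensor) -> list:
        # mixture: (batch, samples) -> list of (batch, samples), one per speaker
        return [mixture for _ in range(self.num_speakers)]


class CleanTrainedASR:
    """Hypothetical backend trained on clean speech only; it never sees
    separation artifacts during training, which is the decoupling idea."""

    def transcribe(self, waveform: torch.Tensor) -> str:
        return "<hypothesis>"  # placeholder transcription


def decoupled_multitalker_asr(mixture: torch.Tensor) -> list:
    frontend = SeparationFrontend(num_speakers=2)
    backend = CleanTrainedASR()
    # Stage 1: separate the mixture into individual talker streams.
    separated = frontend(mixture)
    # Stage 2: run the clean-trained recognizer independently on each stream.
    return [backend.transcribe(wav) for wav in separated]


if __name__ == "__main__":
    mix = torch.randn(1, 16000 * 4)  # 4 s of dummy 16 kHz audio
    print(decoupled_multitalker_asr(mix))
```

The key design point this sketch tries to convey is that the backend is trained and frozen independently of the frontend, so no joint training on noisy or separated speech is assumed.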

@article{yang2025_2503.17886,
  title={Elevating Robust Multi-Talker ASR by Decoupling Speaker Separation and Speech Recognition},
  author={Yufeng Yang and Hassan Taherian and Vahid Ahmadi Kalkhorani and DeLiang Wang},
  journal={arXiv preprint arXiv:2503.17886},
  year={2025}
}