
Seed-ASR: Understanding Diverse Speech and Contexts with LLM-based Speech Recognition

Ye Bai
Jingping Chen
Jitong Chen
Wei Chen
Zhuo Chen
Chuang Ding
Linhao Dong
Qianqian Dong
Yujiao Du
Kepan Gao
Lu Gao
Yi Guo
Minglun Han
Ting-Ting Han
Wenchao Hu
Xinying Hu
Yuxiang Hu
Deyu Hua
Lu Huang
Mingkun Huang
Youjia Huang
Jishuo Jin
Fanliu Kong
Zongwei Lan
Tianyu Li
Xiaoyang Li
Zeyang Li
Zehua Lin
Rui Liu
Shouda Liu
Lu Lu
Yizhou Lu
Jingting Ma
Shengtao Ma
Yulin Pei
Chen Shen
Tian Tan
Xiaogang Tian
Ming Tu
Bo Wang
Hao Wang
Yuping Wang
Yuxuan Wang
Hanzhang Xia
Rui Xia
Shuangyi Xie
Hongmin Xu
Meng Yang
Bihong Zhang
Jun Zhang
Wanyi Zhang
Yang Zhang
Yawei Zhang
Yijie Zheng
Ming Zou
Abstract

Modern automatic speech recognition (ASR) models are expected to accurately transcribe diverse speech signals (from different domains, languages, accents, etc.) given specific contextual information in various application scenarios. Classic end-to-end models fused with extra language models perform well, but mainly in scenarios that match their training data, and they are gradually approaching a performance bottleneck. In this work, we introduce Seed-ASR, a large language model (LLM) based speech recognition model. Seed-ASR is built on the framework of an audio-conditioned LLM (AcLLM), leveraging the capabilities of LLMs by feeding continuous speech representations together with contextual information into the LLM. Through stage-wise large-scale training and the elicitation of context-aware capabilities in the LLM, Seed-ASR demonstrates significant improvement over end-to-end models on comprehensive evaluation sets covering multiple domains, accents/dialects, and languages. Moreover, Seed-ASR can be deployed to support specific needs in various scenarios without requiring extra language models. Compared to recently released large ASR models, Seed-ASR achieves 10%-40% reductions in word error rate (or character error rate for Chinese) on Chinese and English public test sets, further demonstrating its strong performance.
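To make the AcLLM interface concrete, the sketch below illustrates the general pattern the abstract describes: continuous speech representations are projected into the LLM's embedding space and consumed alongside embedded context tokens as a single prefix. This is a minimal toy illustration, not the paper's implementation; the module names, dimensions, and the use of a small Transformer stand-in for the LLM are all hypothetical.

```python
import torch
import torch.nn as nn

class AcLLMSketch(nn.Module):
    """Toy audio-conditioned LM: speech-encoder frames are projected into the
    LM's embedding space and concatenated with a tokenized context prompt.
    All sizes are illustrative, not taken from Seed-ASR."""

    def __init__(self, vocab_size=32000, d_audio=512, d_model=1024,
                 n_heads=8, n_layers=2):
        super().__init__()
        # Maps continuous speech representations to the LM embedding width.
        self.audio_proj = nn.Linear(d_audio, d_model)
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        # Small Transformer as a stand-in for a decoder-only LLM backbone.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, audio_feats, context_ids):
        # audio_feats: (B, T_audio, d_audio) continuous speech representations
        # context_ids: (B, T_ctx) tokenized contextual information (e.g., hotwords)
        audio_emb = self.audio_proj(audio_feats)
        ctx_emb = self.tok_embed(context_ids)
        # Context and speech embeddings form one conditioning prefix for the LM.
        x = torch.cat([ctx_emb, audio_emb], dim=1)
        h = self.backbone(x)
        return self.lm_head(h)  # logits over the transcript vocabulary

# Usage: one utterance of 100 encoder frames plus 8 context tokens.
model = AcLLMSketch()
audio = torch.randn(1, 100, 512)
context = torch.randint(0, 32000, (1, 8))
logits = model(audio, context)
print(logits.shape)  # torch.Size([1, 108, 32000])
```

The key design choice this pattern reflects is that the LLM is conditioned directly on continuous embeddings rather than on discretized audio tokens or an external language model's rescored hypotheses, which is what allows contextual information to be injected as ordinary prompt tokens.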
