Leveraging LLM for Stuttering Speech: A Unified Architecture Bridging Recognition and Event Detection

The performance bottleneck of Automatic Speech Recognition (ASR) in stuttering speech scenarios has limited its applicability in domains such as speech rehabilitation. This paper proposes an LLM-driven ASR-SED multi-task learning framework that jointly optimizes the ASR and Stuttering Event Detection (SED) tasks. We propose a dynamic interaction mechanism in which the ASR branch leverages CTC-generated soft prompts to assist LLM context modeling, while the SED branch outputs stutter embeddings to enhance LLM comprehension of stuttered speech. We incorporate contrastive learning to strengthen the discriminative power of stuttering acoustic features and apply Focal Loss to mitigate the long-tailed distribution of stuttering event categories. Evaluations on the AS-70 Mandarin stuttering dataset demonstrate that our framework reduces the ASR character error rate (CER) to 5.45% (a 37.71% relative reduction) and achieves an average SED F1-score of 73.63% (a 46.58% relative improvement).
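The abstract does not detail how the two branches interact with the LLM, but the described mechanism can be pictured with a minimal PyTorch sketch: CTC-derived acoustic features are projected into soft prompt tokens, the SED branch contributes a stutter embedding as an extra token, and both are prepended to the LLM's text embeddings. All module names and dimensions here are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PromptFusion(nn.Module):
    """Hypothetical sketch of the ASR-SED dynamic interaction:
    CTC features become soft prompts and the SED stutter embedding
    becomes one extra prompt token for the LLM."""

    def __init__(self, ctc_dim=512, sed_dim=256, llm_dim=4096):
        super().__init__()
        self.ctc_proj = nn.Linear(ctc_dim, llm_dim)  # CTC features -> soft prompt tokens
        self.sed_proj = nn.Linear(sed_dim, llm_dim)  # stutter embedding -> one prompt token

    def forward(self, ctc_feats, sed_embed, text_embeds):
        # ctc_feats: (B, T, ctc_dim); sed_embed: (B, sed_dim); text_embeds: (B, L, llm_dim)
        soft_prompt = self.ctc_proj(ctc_feats)               # (B, T, llm_dim)
        stutter_tok = self.sed_proj(sed_embed).unsqueeze(1)  # (B, 1, llm_dim)
        # Prepend acoustic soft prompts and the stutter token to the LLM input sequence.
        return torch.cat([soft_prompt, stutter_tok, text_embeds], dim=1)
```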
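The contrastive objective is likewise unspecified in the abstract; a supervised contrastive loss (Khosla et al., 2020) is one plausible instantiation, pulling together frames that share a stuttering-event label and pushing apart the rest. The temperature value below is an assumption.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeds, labels, temperature=0.1):
    """Supervised contrastive loss over stuttering acoustic embeddings.
    embeds: (N, D) feature vectors; labels: (N,) event-class IDs."""
    z = F.normalize(embeds, dim=-1)
    sim = z @ z.t() / temperature                       # pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye  # positives, no self-pairs
    sim = sim.masked_fill(eye, float("-inf"))           # exclude self from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos).sum(dim=1) / pos_counts
    return loss[pos.any(dim=1)].mean()                  # only anchors with positives
```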
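Focal Loss (Lin et al., 2017) addresses the long tail by down-weighting easy, well-classified examples so that rare stuttering event classes dominate the gradient. A minimal multi-class version follows; the `gamma` and `alpha` values are illustrative defaults, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class Focal Loss: (1 - p_t)^gamma scales the cross-entropy,
    shrinking the contribution of confident (easy) predictions."""
    log_probs = F.log_softmax(logits, dim=-1)              # (B, C)
    ce = F.nll_loss(log_probs, targets, reduction="none")  # per-sample cross-entropy
    pt = torch.exp(-ce)                                    # probability of the true class
    loss = (1.0 - pt) ** gamma * ce
    if alpha is not None:                                  # optional per-class weights, shape (C,)
        loss = alpha[targets] * loss
    return loss.mean()
```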
@article{huang2025_2505.22005,
  title   = {Leveraging LLM for Stuttering Speech: A Unified Architecture Bridging Recognition and Event Detection},
  author  = {Shangkun Huang and Jing Deng and Jintao Kang and Rong Zheng},
  journal = {arXiv preprint arXiv:2505.22005},
  year    = {2025}
}