Language Family Matters: Evaluating LLM-Based ASR Across Linguistic Boundaries

Yuchen Zhang
Ravi Shekhar
Haralambos Mouratidis
Main: 5 pages · 2 figures · 10 tables · Bibliography: 2 pages · Appendix: 6 pages
Abstract

Large Language Model (LLM)-powered Automatic Speech Recognition (ASR) systems achieve strong performance with limited resources by linking a frozen speech encoder to a pretrained LLM via a lightweight connector. Prior work trains a separate connector per language, overlooking linguistic relatedness. We propose a novel, efficient connector-sharing strategy based on language-family membership, enabling a single connector per family, and empirically validate its effectiveness across two multilingual LLMs and two real-world corpora spanning curated and crowd-sourced speech. Our results show that family-based connectors reduce parameter count while improving cross-domain generalization, offering a practical and scalable strategy for multilingual ASR deployment.
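The connector-sharing idea in the abstract can be illustrated with a minimal sketch. The snippet below shows one way to route frozen-encoder features through a single connector per language family instead of per language; the linear-projection connector, the dimensions, and the family groupings here are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

# Hypothetical language -> family map; the paper's actual groupings may differ.
LANG_TO_FAMILY = {
    "es": "romance", "it": "romance", "fr": "romance",
    "de": "germanic", "nl": "germanic", "sv": "germanic",
}

class Connector(nn.Module):
    """Lightweight projection from speech-encoder features to the LLM's
    embedding space. A single linear layer is an assumption; the paper's
    connector architecture may be different."""
    def __init__(self, enc_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(enc_dim, llm_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(x)

# One connector per family, not one per language, cutting parameter count.
enc_dim, llm_dim = 1024, 4096  # illustrative sizes only
connectors = nn.ModuleDict({
    fam: Connector(enc_dim, llm_dim)
    for fam in set(LANG_TO_FAMILY.values())
})

def project(features: torch.Tensor, lang: str) -> torch.Tensor:
    """Route frozen-encoder features through the shared family connector."""
    return connectors[LANG_TO_FAMILY[lang]](features)

# Usage: features for a Spanish utterance reuse the Romance-family
# connector shared with Italian and French.
feats = torch.randn(1, 50, enc_dim)   # stand-in for frozen speech-encoder output
llm_inputs = project(feats, "es")
print(llm_inputs.shape)               # torch.Size([1, 50, 4096])
```

Under this scheme, only the per-family connectors are trained; the speech encoder and the LLM stay frozen, which is what keeps the approach resource-efficient.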
