
Enabling Auditory Large Language Models for Automatic Speech Quality Evaluation

Siyin Wang, Wenyi Yu, Yudong Yang, Changli Tang, Yixuan Li, Jimin Zhuang, Xianzhao Chen, Xiaohai Tian, Jun Zhang, Guangzhi Sun, Lu Lu, Yuxuan Wang, Chao Zhang
Abstract

Speech quality assessment typically requires evaluating audio from multiple aspects, such as mean opinion score (MOS) and speaker similarity (SIM), which can be challenging to cover with a single small model designed for one task. In this paper, we propose leveraging recently introduced auditory large language models (LLMs) for automatic speech quality assessment. By employing task-specific prompts, auditory LLMs are finetuned to predict MOS, SIM, and A/B testing results, which are commonly used for evaluating text-to-speech systems. Additionally, the finetuned auditory LLM is able to generate natural language descriptions assessing aspects such as noisiness, distortion, discontinuity, and overall quality, providing more interpretable outputs. Extensive experiments have been performed on the NISQA, BVCC, SOMOS, and VoxSim speech quality datasets, using open-source auditory LLMs such as SALMONN, Qwen-Audio, and Qwen2-Audio. For the natural language description task, the commercial model Google Gemini 1.5 Pro is also evaluated. The results demonstrate that auditory LLMs achieve competitive performance compared to state-of-the-art task-specific small models in predicting MOS and SIM, while also delivering promising results in A/B testing and natural language descriptions. Our data processing scripts and finetuned model checkpoints can be found at this https URL.
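
To make the prompt-based setup described above concrete, the following is a minimal, illustrative sketch of querying one of the named open-source auditory LLMs (Qwen2-Audio) with a task-specific instruction for a MOS-style rating, assuming the Hugging Face transformers interface for that model. The prompt wording, the file name sample.wav, and the score format are assumptions for illustration only; they are not the paper's released prompts, code, or finetuned checkpoints.

# Illustrative sketch: task-specific prompting of an auditory LLM for a
# MOS-style quality rating. Assumes the Hugging Face transformers interface
# for Qwen2-Audio; prompt text and file name are hypothetical.
import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

model_id = "Qwen/Qwen2-Audio-7B-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2AudioForConditionalGeneration.from_pretrained(model_id)

# Task-specific prompt asking for an overall quality score of the attached clip.
conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": "sample.wav"},
        {"type": "text", "text": "Rate the overall quality of this speech on a "
                                 "scale from 1 (bad) to 5 (excellent). "
                                 "Answer with a single number."},
    ]},
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True,
                                       tokenize=False)

# Load the waveform at the sampling rate expected by the audio encoder.
audio, _ = librosa.load("sample.wav",
                        sr=processor.feature_extractor.sampling_rate)
inputs = processor(text=prompt, audios=[audio], return_tensors="pt", padding=True)

output_ids = model.generate(**inputs, max_new_tokens=16)
# Strip the prompt tokens before decoding the model's answer.
output_ids = output_ids[:, inputs.input_ids.size(1):]
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])

The same pattern extends to SIM or A/B testing by attaching two audio clips and changing the instruction, and to the natural language description task by asking for a free-form assessment instead of a single number.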

@article{wang2025_2409.16644,
  title={Enabling Auditory Large Language Models for Automatic Speech Quality Evaluation},
  author={Siyin Wang and Wenyi Yu and Yudong Yang and Changli Tang and Yixuan Li and Jimin Zhuang and Xianzhao Chen and Xiaohai Tian and Jun Zhang and Guangzhi Sun and Lu Lu and Yuxuan Wang and Chao Zhang},
  journal={arXiv preprint arXiv:2409.16644},
  year={2025}
}