
Preservation of Language Understanding Capabilities in Speech-aware Large Language Models

Main: 2 pages · Appendix: 1 page · Bibliography: 2 pages · 1 figure · 1 table
Abstract

The paper presents C3T (Cross-modal Capabilities Conservation Test), a new benchmark for assessing the performance of speech-aware large language models. The benchmark utilizes textual tasks and a voice cloning text-to-speech model to quantify the extent to which language understanding capabilities are preserved when the model is accessed via speech input. C3T quantifies the fairness of the model for different categories of speakers and its robustness across text and speech modalities.
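Below is a minimal sketch of how the headline numbers of such a benchmark could be aggregated, assuming per-example correctness labels for the text prompt and its TTS-rendered speech counterpart, plus a speaker-category tag. All field names, metric definitions, and formulas here are illustrative assumptions, not the paper's actual scoring procedure.

```python
from collections import defaultdict

def cross_modal_scores(records):
    """Aggregate per-example results into robustness and fairness summaries.

    `records` is a list of dicts with hypothetical keys:
      - "speaker_group": category of the cloned voice (e.g. gender or age group)
      - "text_correct": bool, model answer correct for the text prompt
      - "speech_correct": bool, model answer correct for the spoken prompt
    """
    text_hits, speech_hits, total = 0, 0, 0
    group_stats = defaultdict(lambda: [0, 0])  # group -> [speech hits, count]

    for r in records:
        total += 1
        text_hits += r["text_correct"]
        speech_hits += r["speech_correct"]
        stats = group_stats[r["speaker_group"]]
        stats[0] += r["speech_correct"]
        stats[1] += 1

    text_acc = text_hits / total
    speech_acc = speech_hits / total
    # Robustness: fraction of text-modality accuracy retained under speech input.
    robustness = speech_acc / text_acc if text_acc else 0.0

    # Fairness: spread of speech accuracy across speaker categories.
    group_acc = {g: hits / n for g, (hits, n) in group_stats.items()}
    fairness_gap = max(group_acc.values()) - min(group_acc.values())

    return {
        "text_acc": text_acc,
        "speech_acc": speech_acc,
        "robustness": robustness,
        "fairness_gap": fairness_gap,
        "per_group_speech_acc": group_acc,
    }


if __name__ == "__main__":
    demo = [
        {"speaker_group": "female", "text_correct": True, "speech_correct": True},
        {"speaker_group": "male", "text_correct": True, "speech_correct": False},
        {"speaker_group": "female", "text_correct": False, "speech_correct": False},
    ]
    print(cross_modal_scores(demo))
```

Under these assumptions, a robustness value near 1.0 would indicate that accuracy is largely preserved when the same task is delivered as speech, while a large fairness gap would flag speaker categories for which the model degrades disproportionately.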
