
TESU-LLM: Training Speech-LLMs Without Speech via Unified Encoder Alignment

Main: 4 pages, 2 figures; Bibliography: 1 page
Abstract

Recent advances in speech-enabled language models have shown promising results in building intelligent voice assistants. However, most existing approaches rely on large-scale paired speech-text data and extensive computational resources, which limits scalability and accessibility. In this paper, we present TESU-LLM, a novel framework that enables training speech-capable language models using only text data. Our key insight is to leverage a unified encoder that maps semantically equivalent text and speech inputs to a shared latent space. By aligning the encoder output with the embedding space of an LLM via a lightweight projection network, we enable the model to generalize from text-only supervision to speech-based inference. Despite being trained exclusively on text, TESU-LLM achieves strong performance on various speech-related benchmarks, comparable to baseline methods trained with large-scale multimodal datasets and substantial computational resources. These results highlight the effectiveness and efficiency of our approach, offering a scalable path toward building speech LLMs without speech data.
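To make the alignment idea concrete, the PyTorch sketch below shows one plausible instantiation under stated assumptions: a frozen unified encoder, a frozen LLM embedding table, and a small trainable projection network trained with an MSE alignment loss on text only. The dimensions, the Projector module, and the MSE objective are illustrative assumptions; the abstract does not specify the paper's actual architecture or training objective (e.g., it may instead backpropagate a next-token loss through the frozen LLM).

```python
import torch
import torch.nn as nn

# Hypothetical sizes; the paper does not state the actual dimensions.
ENC_DIM = 512    # output width of the (frozen) unified text/speech encoder
LLM_DIM = 2048   # input-embedding width of the (frozen) LLM


class Projector(nn.Module):
    """Lightweight projection network: the only trainable component.
    Maps unified-encoder features into the LLM's embedding space."""

    def __init__(self, enc_dim: int, llm_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(enc_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Stand-ins for the frozen components so the sketch runs self-contained;
# a real setup would load pretrained encoder and LLM checkpoints instead.
unified_encoder = nn.Linear(80, ENC_DIM).eval()   # placeholder encoder
llm_embed = nn.Embedding(32000, LLM_DIM).eval()   # placeholder LLM embeddings
for p in list(unified_encoder.parameters()) + list(llm_embed.parameters()):
    p.requires_grad_(False)

projector = Projector(ENC_DIM, LLM_DIM)
opt = torch.optim.AdamW(projector.parameters(), lr=1e-4)

# One text-only training step: encode text with the unified encoder,
# project into the LLM space, and align with the LLM's own embeddings
# of the same token sequence (MSE is one simple alignment choice).
features = torch.randn(4, 16, 80)             # dummy batch of text features
token_ids = torch.randint(0, 32000, (4, 16))  # the corresponding token ids

projected = projector(unified_encoder(features))  # (4, 16, LLM_DIM)
target = llm_embed(token_ids)                     # (4, 16, LLM_DIM)
loss = nn.functional.mse_loss(projected, target)

opt.zero_grad()
loss.backward()
opt.step()
```

Because the unified encoder maps semantically equivalent speech and text to the same latent space, speech at inference time passes through the identical encoder-plus-projector path, so the frozen LLM receives inputs in the embedding space it was aligned to, despite never seeing speech during training.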

@article{kim2025_2506.06343,
  title={TESU-LLM: Training Speech-LLMs Without Speech via Unified Encoder Alignment},
  author={Taesoo Kim and Jong Hwan Ko},
  journal={arXiv preprint arXiv:2506.06343},
  year={2025}
}