Soundwave: Less is More for Speech-Text Alignment in LLMs

Abstract

Existing end-to-end speech large language models (LLMs) usually rely on large-scale annotated data for training, while data-efficient training has not been explored in depth. We focus on two fundamental problems between speech and text: the representation space gap and sequence length inconsistency. We propose Soundwave, which utilizes an efficient training strategy and a novel architecture to address these issues. Results show that Soundwave outperforms the advanced Qwen2-Audio on speech translation and AIR-Bench speech tasks, using only one-fiftieth of the training data. Further analysis shows that Soundwave still retains its intelligence during conversation. The project is available at this https URL.
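
To make the two problems named in the abstract concrete, here is a minimal, purely illustrative sketch (not the paper's actual architecture) of how a speech-text bridge typically handles them: a learned projection maps speech encoder features into the LLM's embedding space (representation gap), and adjacent-frame stacking shrinks the speech sequence before it meets text tokens (length inconsistency). All names, dimensions, and the stacking factor below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SpeechToTextBridge(nn.Module):
    """Hypothetical adapter between a speech encoder and an LLM.

    Assumes frame-level speech features of dimension d_speech and an
    LLM embedding dimension d_llm; none of this is taken from the paper.
    """

    def __init__(self, d_speech: int, d_llm: int, stack: int = 4):
        super().__init__()
        self.stack = stack  # adjacent frames merged per step (length reduction)
        # Projection addresses the representation-space gap: stacked speech
        # frames are mapped into the LLM's token embedding space.
        self.proj = nn.Linear(d_speech * stack, d_llm)

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, T, d_speech)
        B, T, D = speech_feats.shape
        # Pad the time axis so T is divisible by the stacking factor.
        pad = (-T) % self.stack
        if pad:
            speech_feats = nn.functional.pad(speech_feats, (0, 0, 0, pad))
        # Sequence-length inconsistency: concatenate every `stack` frames,
        # shrinking the sequence by that factor.
        stacked = speech_feats.reshape(B, -1, D * self.stack)
        return self.proj(stacked)  # (batch, ceil(T/stack), d_llm)

# Usage: 50 Hz speech features become ~12.5 "tokens" per second at stack=4.
bridge = SpeechToTextBridge(d_speech=1024, d_llm=4096, stack=4)
out = bridge(torch.randn(2, 150, 1024))  # -> (2, 38, 4096)
```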

@article{zhang2025_2502.12900,
  title={Soundwave: Less is More for Speech-Text Alignment in LLMs},
  author={Yuhao Zhang and Zhiheng Liu and Fan Bu and Ruiyu Zhang and Benyou Wang and Haizhou Li},
  journal={arXiv preprint arXiv:2502.12900},
  year={2025}
}