
Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction

Abstract

We introduce Baichuan-Audio, an end-to-end audio large language model that seamlessly integrates audio understanding and generation. It features a text-guided aligned speech generation mechanism, enabling real-time speech interaction with both comprehension and generation capabilities. Baichuan-Audio leverages a pre-trained ASR model, followed by multi-codebook discretization of speech at a frame rate of 12.5 Hz. This multi-codebook setup ensures that speech tokens retain both semantic and acoustic information. To further enhance modeling, an independent audio head processes the audio tokens, effectively capturing their unique characteristics. To mitigate the loss of intelligence during pre-training and preserve the original capabilities of the LLM, we propose a two-stage pre-training strategy that maintains language understanding while enhancing audio modeling. Following alignment, the model excels in real-time spoken dialogue and exhibits strong question-answering abilities, demonstrating its versatility and efficiency. Our code, models, and training data are available at this https URL
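The multi-codebook discretization of 12.5 Hz speech frames can be illustrated with a minimal residual-vector-quantization sketch. This is a hypothetical stand-in, not the paper's actual tokenizer (which builds on a pre-trained ASR model); the codebook count, codebook size, and feature dimension below are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

num_codebooks = 8      # assumed number of codebooks per frame
codebook_size = 1024   # assumed entries per codebook
dim = 64               # assumed feature dimension per 12.5 Hz frame

# Random codebooks standing in for trained ones.
codebooks = rng.normal(size=(num_codebooks, codebook_size, dim))

def quantize(frame):
    """Residual VQ: each codebook quantizes what the previous ones missed."""
    residual = frame.copy()
    tokens = []
    for cb in codebooks:
        idx = int(np.argmin(((cb - residual) ** 2).sum(axis=1)))
        tokens.append(idx)
        residual = residual - cb[idx]
    return tokens

# About one second of speech at 12.5 Hz -> 12 frames,
# each discretized into a tuple of multi-codebook tokens.
frames = rng.normal(size=(12, dim))
token_grid = [quantize(f) for f in frames]
print(len(token_grid), len(token_grid[0]))  # 12 frames, 8 tokens each
```

Because each codebook encodes the residual left by the previous ones, early codebooks tend to capture coarse (semantic) structure while later ones refine acoustic detail, which is one way a multi-codebook setup can retain both kinds of information.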

@article{li2025_2502.17239,
  title={Baichuan-Audio: A Unified Framework for End-to-End Speech Interaction},
  author={Tianpeng Li and Jun Liu and Tao Zhang and Yuanbo Fang and Da Pan and Mingrui Wang and Zheng Liang and Zehuan Li and Mingan Lin and Guosheng Dong and Jianhua Xu and Haoze Sun and Zenan Zhou and Weipeng Chen},
  journal={arXiv preprint arXiv:2502.17239},
  year={2025}
}