Hello-Chat: Towards Realistic Social Audio Interactions

Yueran Hou
Peilei Jia
Zihan Sun
Qihang Lu
Wenbing Yang
Yingming Gao
Ya Li
Jun Gao
Main text: 15 pages, 2 figures, 8 tables. Bibliography: 5 pages. Appendix: 2 pages.
Abstract

Recent advances in Large Audio Language Models (LALMs) have demonstrated strong performance in speech recognition and translation. However, existing models often suffer from a disconnect between perception and expression, resulting in a robotic "read-speech" style that lacks the spontaneity and emotional resonance of real human interaction. In this report, we introduce Hello-Chat, an end-to-end audio language model designed for realistic social scenarios. By leveraging a large-scale dataset of real-life conversations and employing a modality-interleaved training strategy, Hello-Chat achieves a breakthrough in anthropomorphic speech generation. Experimental results show that our model not only reaches state-of-the-art (SOTA) performance on specific audio understanding tasks but also significantly outperforms existing baselines in prosodic naturalness and emotional alignment, paving the way for the next generation of empathetic AI agents.
