StreamMel: Real-Time Zero-shot Text-to-Speech via Interleaved Continuous Autoregressive Modeling

Main: 4 pages
2 figures
1 table
Bibliography: 1 page
Abstract

Recent advances in zero-shot text-to-speech (TTS) synthesis have achieved high-quality speech generation for unseen speakers, but most systems remain unsuitable for real-time applications because of their offline design. Current streaming TTS paradigms often rely on multi-stage pipelines and discrete representations, leading to increased computational cost and suboptimal system performance. In this work, we propose StreamMel, a pioneering single-stage streaming TTS framework that models continuous mel-spectrograms. By interleaving text tokens with acoustic frames, StreamMel enables low-latency, autoregressive synthesis while preserving high speaker similarity and naturalness. Experiments on LibriSpeech demonstrate that StreamMel outperforms existing streaming TTS baselines in both quality and latency. It even achieves performance comparable to offline systems while supporting efficient real-time generation, showcasing broad prospects for integration with real-time speech large language models. Audio samples are available at: this https URL.
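The core idea named in the abstract, interleaving text tokens with continuous acoustic frames in a single autoregressive sequence, can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the shared model dimension, the fixed 1-token-to-N-frame interleaving ratio, and the `interleave` helper are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative only: model dimension and interleaving ratio are assumptions,
# not values from the StreamMel paper.
MODEL_DIM = 8         # assumed shared embedding dimension
FRAMES_PER_TOKEN = 2  # assumed number of mel frames per text token

def interleave(text_embs: np.ndarray, mel_frames: np.ndarray,
               frames_per_token: int = FRAMES_PER_TOKEN) -> np.ndarray:
    """Merge text embeddings and mel-frame embeddings into one sequence.

    text_embs:  (T, D) embedded text tokens
    mel_frames: (T * frames_per_token, D) mel frames projected to dim D
    returns:    (T * (1 + frames_per_token), D) interleaved sequence,
                ordered token_0, its frames, token_1, its frames, ...
    """
    chunks = []
    for i, tok in enumerate(text_embs):
        chunks.append(tok[None, :])                     # one text token
        start = i * frames_per_token
        chunks.append(mel_frames[start:start + frames_per_token])  # its frames
    return np.concatenate(chunks, axis=0)

# Example: 3 text tokens interleaved with 6 mel frames -> 9 positions total.
seq = interleave(np.zeros((3, MODEL_DIM)), np.ones((6, MODEL_DIM)))
print(seq.shape)  # (9, 8)
```

Because each new mel frame extends the sequence one step at a time, an autoregressive model over such a sequence can emit acoustic frames as soon as the corresponding text arrives, which is what makes the single-stage streaming formulation low-latency.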

@article{wang2025_2506.12570,
  title={StreamMel: Real-Time Zero-shot Text-to-Speech via Interleaved Continuous Autoregressive Modeling},
  author={Hui Wang and Yifan Yang and Shujie Liu and Jinyu Li and Lingwei Meng and Yanqing Liu and Jiaming Zhou and Haoqin Sun and Yan Lu and Yong Qin},
  journal={arXiv preprint arXiv:2506.12570},
  year={2025}
}