SpeakStream: Streaming Text-to-Speech with Interleaved Data

25 May 2025
Richard He Bai, Zijin Gu, Tatiana Likhomanenko, Navdeep Jaitly
Main text: 6 pages · 3 figures · 1 table · Bibliography: 1 page
Abstract

The latency bottleneck of traditional text-to-speech (TTS) systems fundamentally hinders the potential of streaming large language models (LLMs) in conversational AI. These TTS systems are typically trained on, and run inference over, complete utterances, so they introduce unacceptable delays when coupled with streaming LLM outputs, even with optimized inference speeds. This is particularly problematic for building responsive conversational agents, where low first-token latency is critical. In this paper, we present SpeakStream, a streaming TTS system that generates audio incrementally from streaming text using a decoder-only architecture. SpeakStream is trained with a next-step prediction loss on interleaved text-speech data. During inference, it generates speech incrementally while absorbing streaming input text, making it particularly suitable for cascaded conversational AI agents in which an LLM streams text to a TTS system. Our experiments demonstrate that SpeakStream achieves state-of-the-art first-token latency while maintaining the quality of non-streaming TTS systems.
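The interleaved, decoder-only formulation described in the abstract can be pictured as a single growing token sequence in which chunks of incoming text alternate with the speech tokens generated for them. The sketch below illustrates one way such a streaming inference loop could look; `model`, `vocoder`, `is_speech_token`, the chunk size, and the interleaving schedule are hypothetical placeholders for illustration, not SpeakStream's actual interfaces.

```python
# Illustrative sketch of interleaved streaming TTS inference.
# All objects (model, vocoder, is_speech_token) are hypothetical stand-ins.

from typing import Callable, Iterable, Iterator, List


def stream_tts(model,
               vocoder,
               is_speech_token: Callable[[int], bool],
               text_stream: Iterable[List[int]],
               max_speech_per_chunk: int = 64) -> Iterator[bytes]:
    """Absorb streaming text tokens and emit audio incrementally.

    A decoder-only model sees one interleaved history of text and speech
    tokens and is queried step by step with next-token prediction.
    """
    context: List[int] = []                 # interleaved text/speech history
    for text_chunk in text_stream:          # e.g. token chunks streamed by an LLM
        context.extend(text_chunk)          # absorb new text as it arrives
        for _ in range(max_speech_per_chunk):
            next_tok = model.next_token(context)   # next-step prediction
            if not is_speech_token(next_tok):      # model defers until more text arrives
                break
            context.append(next_tok)               # speech token joins the context
            yield vocoder.decode([next_tok])       # emit audio with low first-token latency
```

Because speech tokens are produced as soon as the first text chunk lands in the context, the time to the first audio sample depends on the LLM's first text chunk rather than on the full utterance, which is the latency behaviour the paper targets.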

arXiv: https://arxiv.org/abs/2505.19206
@article{bai2025_2505.19206,
  title={SpeakStream: Streaming Text-to-Speech with Interleaved Data},
  author={Richard He Bai and Zijin Gu and Tatiana Likhomanenko and Navdeep Jaitly},
  journal={arXiv preprint arXiv:2505.19206},
  year={2025}
}