  3. 2505.19669
Zero-Shot Streaming Text to Speech Synthesis with Transducer and Auto-Regressive Modeling

26 May 2025
Haiyang Sun
Shujie Hu
Shujie Liu
Lingwei Meng
Hui Wang
Bing Han
Yifan Yang
Yanqing Liu
Sheng Zhao
Yan Lu
Yanmin Qian
arXiv (abs) · PDF · HTML
Main: 4 pages · Bibliography: 1 page · 5 figures · 4 tables
Abstract

Zero-shot streaming text-to-speech is an important research topic in human-computer interaction. Existing methods primarily rely on a lookahead mechanism, using future text to achieve natural streaming speech synthesis, which introduces high processing latency. To address this issue, we propose SMLLE, a streaming framework that generates high-quality speech frame by frame. SMLLE employs a Transducer to convert text into semantic tokens in real time while simultaneously obtaining duration alignment information. The combined outputs are then fed into a fully autoregressive (AR) streaming model to reconstruct mel-spectrograms. To further stabilize generation, we design a Delete <bos> Mechanism that allows the AR model to access future text while introducing as little delay as possible. Experimental results show that SMLLE outperforms current streaming TTS methods and achieves performance comparable to sentence-level TTS systems. Samples are available on this https URL.
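The two-stage pipeline the abstract describes (a Transducer emitting semantic tokens and durations in real time, followed by a fully autoregressive model reconstructing mel-spectrograms frame by frame) can be sketched in miniature. This is a hypothetical toy illustration of the data flow only: all class names, method signatures, and the dummy token/frame representations are assumptions, not the paper's actual implementation.

```python
# Toy sketch of an SMLLE-style streaming TTS data flow.
# Strings stand in for semantic tokens and mel frames.

class ToyTransducer:
    """Maps each incoming text token to semantic tokens plus a duration."""
    def step(self, text_token):
        # Dummy rule: one semantic token per character; duration = char count.
        semantic = [f"sem({c})" for c in text_token]
        duration = len(text_token)
        return semantic, duration

class ToyARDecoder:
    """Autoregressively turns semantic tokens into 'mel frames' one at a time."""
    def __init__(self):
        self.history = []  # previously emitted frames condition the next one
    def step(self, semantic_token):
        frame = f"mel[{len(self.history)}:{semantic_token}]"
        self.history.append(frame)
        return frame

def stream_tts(text_tokens):
    """Emit mel frames incrementally as text arrives, with no full-sentence wait."""
    transducer, decoder = ToyTransducer(), ToyARDecoder()
    frames = []
    for tok in text_tokens:
        semantic, _duration = transducer.step(tok)  # real-time semantic tokens + alignment
        for s in semantic:
            frames.append(decoder.step(s))          # frame-by-frame AR reconstruction
    return frames

frames = stream_tts(["hi", "yo"])
```

The point of the sketch is the control flow: each text token is consumed as it arrives and immediately yields frames, which is what distinguishes this design from lookahead-based methods that wait on future text.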

@article{sun2025_2505.19669,
  title={Zero-Shot Streaming Text to Speech Synthesis with Transducer and Auto-Regressive Modeling},
  author={Haiyang Sun and Shujie Hu and Shujie Liu and Lingwei Meng and Hui Wang and Bing Han and Yifan Yang and Yanqing Liu and Sheng Zhao and Yan Lu and Yanmin Qian},
  journal={arXiv preprint arXiv:2505.19669},
  year={2025}
}