ResearchTrend.AI

VoiceStar: Robust Zero-Shot Autoregressive TTS with Duration Control and Extrapolation

26 May 2025
Puyuan Peng
Shang-Wen Li
Abdelrahman Mohamed
David Harwath
Main: 9 pages · 13 figures · 8 tables · Bibliography: 6 pages · Appendix: 6 pages
Abstract

We present VoiceStar, the first zero-shot TTS model that achieves both output duration control and extrapolation. VoiceStar is an autoregressive encoder-decoder neural codec language model that leverages a novel Progress-Monitoring Rotary Position Embedding (PM-RoPE) and is trained with Continuation-Prompt Mixed (CPM) training. PM-RoPE enables the model to better align text and speech tokens, indicates the target duration for the generated speech, and allows the model to generate speech waveforms much longer in duration than those seen during training. CPM training also helps to mitigate the training/inference mismatch, and significantly improves the quality of the generated speech in terms of speaker similarity and intelligibility. VoiceStar outperforms or is on par with current state-of-the-art models on short-form benchmarks such as LibriSpeech and Seed-TTS, and significantly outperforms these models on long-form/extrapolation benchmarks (20-50s) in terms of intelligibility and naturalness. Code and model weights: this https URL
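The abstract does not spell out how PM-RoPE encodes target duration. As a minimal sketch only: standard RoPE rotates each channel pair of a token's features by an angle proportional to its position, and one plausible way to signal "progress toward a target length" is to index each speech token by its distance to the target duration. The countdown indexing below is a hypothetical illustration of that idea, not the paper's actual formulation; `T`, `apply_rope`, and `rope_angles` are illustrative names.

```python
import numpy as np

def rope_angles(dim: int, base: float = 10000.0) -> np.ndarray:
    # Standard RoPE frequency schedule: one frequency per channel pair.
    return base ** (-np.arange(0, dim, 2) / dim)

def apply_rope(x: np.ndarray, pos: np.ndarray) -> np.ndarray:
    # Rotate each consecutive channel pair of x by pos * frequency.
    # x: (seq_len, dim), pos: (seq_len,)
    freqs = rope_angles(x.shape[-1])          # (dim/2,)
    theta = pos[:, None] * freqs[None, :]     # (seq_len, dim/2)
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical "progress" indexing: position the t-th speech token by its
# distance to an assumed target length T, so every token's embedding carries
# how much of the utterance remains.
T = 10                  # assumed target number of speech tokens
t = np.arange(T)
progress_pos = T - t    # counts down 10, 9, ..., 1

x = np.random.default_rng(0).normal(size=(T, 8))
y = apply_rope(x, progress_pos)
print(y.shape)  # (10, 8)
```

Because the transform is a pure rotation of each channel pair, it changes token orientation (and hence attention scores) without changing feature magnitudes, which is the usual appeal of RoPE-style embeddings.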

@article{peng2025_2505.19462,
  title={VoiceStar: Robust Zero-Shot Autoregressive TTS with Duration Control and Extrapolation},
  author={Puyuan Peng and Shang-Wen Li and Abdelrahman Mohamed and David Harwath},
  journal={arXiv preprint arXiv:2505.19462},
  year={2025}
}