VocalNet: Speech LLM with Multi-Token Prediction for Faster and High-Quality Generation

5 April 2025
Yuhao Wang
Heyang Liu
Ziyang Cheng
Ronghua Wu
Qunshan Gu
Yanfeng Wang
Yu Wang
Abstract

Speech large language models (LLMs) have emerged as a prominent research focus in speech processing. We introduce VocalNet-1B and VocalNet-8B, a series of high-performance, low-latency speech LLMs enabled by a scalable and model-agnostic training framework designed for real-time voice interaction. Central to our contribution is the first application of multi-token prediction (MTP) to speech LLMs. This approach represents a paradigm shift from standard next-token prediction (NTP), offering simultaneous improvements in generation speed and quality. Informed by analysis of MTP's effect on speech generation and experimental comparisons, we designed a straightforward and highly effective MTP implementation. Experiments demonstrate that VocalNet performs on par with mainstream Omni LLMs even with limited training data, and significantly surpasses existing open-source speech LLMs. To foster reproducibility and community advancement, all model weights, inference code, training data, and framework implementations have been made publicly available at this https URL.
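
To make the MTP idea concrete, below is a minimal, hypothetical sketch in PyTorch of how multiple prediction heads could emit several future speech tokens from each hidden state, rather than a single next token as in NTP. The abstract does not specify VocalNet's actual head architecture or loss weighting, so the class names, head structure, and loss alignment here are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class MTPSpeechHead(nn.Module):
    # Hypothetical multi-token prediction head: from each backbone hidden
    # state, predict the next k speech tokens in parallel (k = 1 reduces
    # to standard next-token prediction).
    def __init__(self, hidden_dim: int, vocab_size: int, num_future_tokens: int = 4):
        super().__init__()
        # One linear projection per future position t+1 ... t+k (assumed design).
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, vocab_size) for _ in range(num_future_tokens)
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim) from the LLM backbone.
        # Returns logits of shape (batch, seq_len, num_future_tokens, vocab_size).
        return torch.stack([head(hidden_states) for head in self.heads], dim=2)

def mtp_loss(logits: torch.Tensor, speech_tokens: torch.Tensor) -> torch.Tensor:
    # Simple unweighted cross-entropy over the k future positions:
    # head j at step t is supervised by the speech token at step t + j + 1.
    batch, seq_len, k, vocab = logits.shape
    losses = []
    for j in range(k):
        valid = seq_len - (j + 1)  # positions with an in-range target
        if valid <= 0:
            continue
        pred = logits[:, :valid, j, :].reshape(-1, vocab)
        tgt = speech_tokens[:, j + 1 : j + 1 + valid].reshape(-1)
        losses.append(nn.functional.cross_entropy(pred, tgt))
    return torch.stack(losses).mean()

At inference time, a decoder built this way can accept several of the predicted future tokens per forward pass, which is the source of the speed-up the abstract attributes to MTP; the exact acceptance strategy used by VocalNet is described in the paper rather than here.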

@article{wang2025_2504.04060,
  title={VocalNet: Speech LLM with Multi-Token Prediction for Faster and High-Quality Generation},
  author={Yuhao Wang and Heyang Liu and Ziyang Cheng and Ronghua Wu and Qunshan Gu and Yanfeng Wang and Yu Wang},
  journal={arXiv preprint arXiv:2504.04060},
  year={2025}
}