On multi-token prediction for efficient LLM inference

13 February 2025
Somesh Mehra
Javier Alonso García
Lukas Mauch
Abstract

We systematically investigate multi-token prediction (MTP) capabilities within LLMs pre-trained for next-token prediction (NTP). We first show that such models inherently possess MTP capabilities via numerical marginalization over intermediate token probabilities, though performance is data-dependent and improves with model scale. Furthermore, we explore the challenges of integrating MTP heads into frozen LLMs and find that their hidden layers are strongly specialized for NTP, making adaptation non-trivial. Finally, we show that while joint training of MTP heads with the backbone improves performance, it cannot fully overcome this barrier, motivating further research in this direction. Our findings provide a deeper understanding of MTP applied to pretrained LLMs, informing strategies for accelerating inference through parallel token prediction.
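The "numerical marginalization" the abstract mentions is just the law of total probability applied over the intermediate token: P(x_{t+2} | x_{≤t}) = Σ_v P(x_{t+2} | x_{≤t}, v) · P(v | x_{≤t}). The following is a minimal sketch of that idea with a toy next-token model standing in for an LLM's NTP head; all names here are illustrative, not from the paper.

```python
import numpy as np

# Toy next-token model: the next-token distribution depends only on the
# last token, via a fixed V x V row-stochastic transition matrix T.
# In the paper's setting, next_token_probs would be a forward pass of an LLM.
V = 4
rng = np.random.default_rng(0)
T = rng.random((V, V))
T /= T.sum(axis=1, keepdims=True)  # each row sums to 1

def next_token_probs(context):
    """P(x_{t+1} | x_{<=t}) under the toy model."""
    return T[context[-1]]

def two_token_probs(context, top_k=None):
    """P(x_{t+2} | x_{<=t}) by marginalizing over the intermediate token:
    sum_v P(v | context) * P(x_{t+2} | context + [v]).
    top_k optionally truncates the sum to the k most likely intermediate
    tokens, as one would for efficiency with a real vocabulary."""
    p1 = next_token_probs(context)
    candidates = np.argsort(p1)[::-1][:top_k] if top_k else range(V)
    p2 = np.zeros(V)
    for v in candidates:
        p2 += p1[v] * next_token_probs(context + [v])
    return p2

p2 = two_token_probs([0])
assert np.isclose(p2.sum(), 1.0)  # exact marginalization yields a proper distribution
```

Note that with a real LLM the exact sum is a forward pass per intermediate token, which is why truncation (the `top_k` path above) or dedicated MTP heads are needed for this to actually speed up inference.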

@article{mehra2025_2502.09419,
  title={On multi-token prediction for efficient LLM inference},
  author={Somesh Mehra and Javier Alonso Garcia and Lukas Mauch},
  journal={arXiv preprint arXiv:2502.09419},
  year={2025}
}