ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding

20 August 2024
Jian Chen
Vashisth Tiwari
Ranajoy Sadhukhan
Zhuoming Chen
Jinyuan Shi
Ian En-Hsu Yen
Ruihang Lai
Avner May
Tianqi Chen
Beidi Chen
    LRM
arXiv · PDF · HTML
Abstract

Large Language Models (LLMs) have become increasingly prevalent in long-context applications such as interactive chatbots, document analysis, and agent workflows, but serving long-context requests with low latency and high throughput remains challenging. Speculative decoding (SD) is a widely used technique for reducing latency losslessly, yet conventional wisdom holds that its efficacy is limited to small batch sizes. In MagicDec, we show that, surprisingly, SD can achieve speedup even in the high-throughput inference regime for moderate to long sequences. More interestingly, our rigorous analysis shows that an intelligent drafting strategy can achieve better speedup as batch size increases. MagicDec first identifies how the bottleneck shifts with increasing batch size and sequence length, and then uses these insights to deploy SD more effectively for high-throughput inference. We leverage a draft model with a sparse KV cache to address the KV bottleneck, which scales with both sequence length and batch size. Additionally, we propose a theoretical model to select the optimal drafting strategy for maximum speedup. Our work highlights the broad applicability of speculative decoding in long-context serving, since it can enhance throughput and reduce latency without compromising accuracy. For moderate to long sequences, we demonstrate up to 2.51x speedup for Llama3.1-8B when serving batch sizes ranging from 32 to 256 across various types of hardware and tasks.
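The kind of theoretical speedup model the abstract describes can be sketched with the standard speculative-decoding cost analysis. This is an illustrative sketch, not MagicDec's actual model: the acceptance rate `alpha`, speculation length `gamma`, and draft-to-target cost ratio `cost_ratio` are assumed parameters, and the independent-acceptance formula is the classic one from the speculative-decoding literature.

```python
def expected_accepted_tokens(alpha: float, gamma: int) -> float:
    """Expected tokens generated per verification step, assuming each of the
    gamma drafted tokens is accepted independently with probability alpha.
    This is the classic truncated geometric series: (1 - alpha^(gamma+1)) / (1 - alpha)."""
    return (1 - alpha ** (gamma + 1)) / (1 - alpha)


def expected_speedup(alpha: float, gamma: int, cost_ratio: float) -> float:
    """Speedup over plain autoregressive decoding: expected tokens per step
    divided by the relative cost of one step (gamma draft forward passes plus
    one target verification pass). cost_ratio is the per-token draft/target
    cost; a sparse-KV draft model keeps it small even when decoding is
    KV-bound at large batch sizes and long sequences."""
    return expected_accepted_tokens(alpha, gamma) / (gamma * cost_ratio + 1)


# Example: a high acceptance rate with a cheap (sparse-KV) draft model yields
# a clear speedup; the same draft at full cost (cost_ratio=1.0) would lose.
print(round(expected_speedup(alpha=0.8, gamma=4, cost_ratio=0.1), 2))  # → 2.4
```

The point of such a model is visible in the last two lines: speedup depends on keeping `cost_ratio` small, which is why a draft with a sparse KV cache can remain profitable even in regimes where decoding cost is dominated by KV-cache reads.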

View on arXiv
@article{sadhukhan2025_2408.11049,
  title={MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding},
  author={Ranajoy Sadhukhan and Jian Chen and Zhuoming Chen and Vashisth Tiwari and Ruihang Lai and Jinyuan Shi and Ian En-Hsu Yen and Avner May and Tianqi Chen and Beidi Chen},
  journal={arXiv preprint arXiv:2408.11049},
  year={2025}
}