SPIRe: Boosting LLM Inference Throughput with Speculative Decoding

Speculative decoding (SD) has been shown to reduce the latency of autoregressive decoding (AD) by 2-3x for small batch sizes. However, increasing throughput and therefore reducing the cost per token requires decoding with large batch sizes. Recent work shows that SD can accelerate decoding with large batch sizes too if the context is sufficiently long and the draft model's KV cache is sparse. We introduce SPIRe, a draft model that combines static sparse attention, pruned initialization, and feedback memory to increase the modeled throughput of speculative decoding by over 100% compared to speculation with a much smaller draft model and by over 35% compared to the strong baseline of sparse self-speculation. Our approach is particularly effective when context lengths vary significantly across requests.
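For readers unfamiliar with the draft-and-verify mechanism the abstract builds on, the sketch below shows a generic greedy-acceptance form of speculative decoding: a cheap draft model proposes a block of tokens, the target model scores the block in a single forward pass, and the longest agreeing prefix is kept plus one "free" token from the target. This is a minimal illustration, not the paper's SPIRe algorithm; `draft_next` and `target_argmax_block` are hypothetical stand-ins for real model calls.

```python
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_next: Callable[[List[int]], int],                       # draft model: argmax of next token
    target_argmax_block: Callable[[List[int], int], List[int]],   # target argmaxes at the last n prefix positions
    k: int = 4,
    max_new_tokens: int = 32,
) -> List[int]:
    """Greedy-acceptance speculative decoding (illustrative sketch only)."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # 1. Draft model proposes k tokens autoregressively (cheap per step).
        proposal, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target model verifies the whole block in one forward pass,
        #    returning its predictions for the k drafted positions plus one extra.
        target_preds = target_argmax_block(tokens + proposal, k + 1)
        # 3. Accept the longest prefix where draft and target agree,
        #    then append one token from the target model for free.
        n_accept = 0
        while n_accept < k and proposal[n_accept] == target_preds[n_accept]:
            n_accept += 1
        tokens += proposal[:n_accept] + [target_preds[n_accept]]
    return tokens[: len(prompt) + max_new_tokens]
```

The cost structure this exposes is why the draft model matters: verification amortizes the target model over several positions, so throughput gains hinge on the draft model being cheap (e.g., via sparse attention over its KV cache, as in SPIRe) while still agreeing with the target often enough.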
@article{neelam2025_2504.06419,
  title   = {SPIRe: Boosting LLM Inference Throughput with Speculative Decoding},
  author  = {Sanjit Neelam and Daniel Heinlein and Vaclav Cvicek and Akshay Mishra and Reiner Pope},
  journal = {arXiv preprint arXiv:2504.06419},
  year    = {2025}
}