
PipeSpec: Breaking Stage Dependencies in Hierarchical LLM Decoding

Abstract

Speculative decoding accelerates large language model inference by using smaller draft models to generate candidate tokens for parallel verification. However, current approaches are limited by sequential stage dependencies that prevent full hardware utilization. We present PipeSpec, a framework that generalizes speculative decoding to k models arranged in a hierarchical pipeline, enabling asynchronous execution with lightweight coordination for prediction verification and rollback. Our analytical model characterizes token generation rates across pipeline stages and proves guaranteed throughput improvements over traditional decoding for any non-zero acceptance rate. We further derive closed-form expressions for steady-state verification probabilities that explain the empirical benefits of pipeline depth. Experimental results show that PipeSpec achieves up to 2.54× speedup while outperforming state-of-the-art methods. We validate PipeSpec across text summarization and code generation tasks using LLaMA 2 and 3 models, demonstrating that pipeline efficiency increases with model depth, providing a scalable approach to accelerating LLM inference on multi-device systems.
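
To make the hierarchical verify-and-rollback dataflow concrete, here is a minimal Python sketch. This is not the authors' implementation: the toy draft/verify functions, the fixed acceptance rate, and the token naming are all illustrative assumptions, and the loop serializes what PipeSpec runs asynchronously across devices; only the coordination pattern between adjacent stages is meant to match the description above.

```python
"""Toy sketch of hierarchical speculative decoding with rollback.

Each stage drafts candidate tokens for the next-larger model, which
verifies them and rolls back everything past the first mismatch.
Real models are replaced by stand-in functions; acceptance is simulated
with a fixed probability rather than actual logit comparison.
"""
import random

def draft(stage_id, context, n):
    # Stand-in for a small draft model: propose n candidate tokens.
    return [f"s{stage_id}_t{len(context) + i}" for i in range(n)]

def verify(stage_id, context, candidates, accept_rate):
    # Stand-in for batched verification by a larger model: accept a
    # prefix of the candidates, then emit one corrected token at the
    # first mismatch and discard the rest (the rollback).
    accepted = []
    for tok in candidates:
        if random.random() < accept_rate:
            accepted.append(tok)
        else:
            accepted.append(f"s{stage_id}_fix{len(context) + len(accepted)}")
            break
    return accepted

def pipespec_step(context, k, draft_len=4, accept_rate=0.8):
    # One coordination round through a k-stage hierarchy: stage 0 drafts,
    # each subsequent stage verifies the surviving candidates.
    candidates = draft(0, context, draft_len)
    for stage in range(1, k):
        candidates = verify(stage, context, candidates, accept_rate)
    return context + candidates

if __name__ == "__main__":
    random.seed(0)
    ctx = []
    for _ in range(5):
        ctx = pipespec_step(ctx, k=3)
    print(len(ctx), "tokens accepted:", ctx)
```

Because any non-zero acceptance rate lets a verification step commit more than one token on average, a round like the one above yields throughput at least matching standard autoregressive decoding, which is the intuition behind the paper's guaranteed-improvement claim.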

@article{mcdanel2025_2505.01572,
  title={PipeSpec: Breaking Stage Dependencies in Hierarchical LLM Decoding},
  author={Bradley McDanel and Sai Qian Zhang and Yunhai Hu and Zining Liu},
  journal={arXiv preprint arXiv:2505.01572},
  year={2025}
}