
Token-Driven GammaTune: Adaptive Calibration for Enhanced Speculative Decoding

4 pages (main text), 2 figures, 2 tables; 2-page bibliography
Abstract

Speculative decoding accelerates large language model (LLM) inference by using a smaller draft model to propose tokens, which are then verified by a larger target model. However, selecting an optimal speculation length is critical for maximizing speedup while minimizing wasted computation. We introduce \textit{GammaTune} and \textit{GammaTune+}, training-free adaptive algorithms that dynamically adjust speculation length based on token acceptance rates using a heuristic-based switching mechanism. Evaluated on SpecBench across multiple tasks and model pairs, our methods outperform other heuristic-based approaches and fixed-length speculative decoding, achieving an average speedup of 15\% ($\pm$5\%) with \textit{GammaTune} and 16\% ($\pm$3\%) with \textit{GammaTune+}, while reducing performance variance. This makes \textit{GammaTune} a robust and efficient solution for real-world deployment.
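The abstract describes the mechanism only at a high level. Below is a minimal Python sketch of what an acceptance-rate-driven speculation-length controller could look like; the class name, thresholds, window size, and the increment/decrement update rule are all illustrative assumptions, not the authors' GammaTune algorithm.

```python
# Illustrative sketch of adjusting speculation length (gamma) from
# observed token acceptance rates. All thresholds and the update rule
# below are hypothetical, chosen only to demonstrate the idea.

from collections import deque


class AdaptiveGamma:
    """Adjust gamma from a sliding window of per-round acceptance rates."""

    def __init__(self, gamma=4, gamma_min=1, gamma_max=8,
                 window=16, low=0.4, high=0.8):
        self.gamma = gamma
        self.gamma_min, self.gamma_max = gamma_min, gamma_max
        self.rates = deque(maxlen=window)  # recent acceptance rates
        self.low, self.high = low, high    # hypothetical switch thresholds

    def update(self, accepted, proposed):
        """Record one verification round, then return the new gamma."""
        self.rates.append(accepted / max(proposed, 1))
        avg = sum(self.rates) / len(self.rates)
        if avg > self.high:
            # Draft model is reliable: speculate further per round.
            self.gamma = min(self.gamma + 1, self.gamma_max)
        elif avg < self.low:
            # Frequent rejections: shorten drafts to cut wasted work.
            self.gamma = max(self.gamma - 1, self.gamma_min)
        return self.gamma


# Example: after a round where 3 of 4 drafted tokens were accepted.
ctrl = AdaptiveGamma()
next_gamma = ctrl.update(accepted=3, proposed=4)
```

A windowed average smooths per-round noise in the acceptance rate, so a single unlucky verification round does not immediately collapse the speculation length.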
