SAGE: Accelerating Vision-Language Models via Entropy-Guided Adaptive Speculative Decoding

Yujia Tong
Tian Zhang
Yunyang Wan
Kaiwei Lin
Jingling Yuan
Chuang Hu
Abstract

Speculative decoding has emerged as a promising approach to accelerate inference in vision-language models (VLMs) by enabling parallel verification of multiple draft tokens. However, existing methods rely on static tree structures that remain fixed throughout the decoding process, failing to adapt to the varying prediction difficulty across generation steps. This leads to suboptimal acceptance lengths and limited speedup. In this paper, we propose SAGE, a novel framework that dynamically adjusts the speculation tree structure based on real-time prediction uncertainty. Our key insight is that output entropy serves as a natural confidence indicator with strong temporal correlation across decoding steps. SAGE constructs deeper, narrower trees for high-confidence predictions to maximize speculation depth, and shallower, wider trees for uncertain predictions to diversify exploration, yielding longer acceptance lengths and greater speedups than static-tree baselines. Experiments on multiple benchmarks demonstrate the effectiveness of SAGE: without any loss in output quality, it delivers up to 3.36× decoding speedup for LLaVA-OneVision-72B and 3.18× for Qwen2.5-VL-72B.
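
As a rough illustration of the entropy-guided policy described above, the sketch below maps a step's next-token entropy to a speculation-tree shape. The threshold tau and the two (depth, branching) shapes are hypothetical placeholders for illustration only; the abstract does not specify SAGE's actual mapping.

```python
import math

def token_entropy(probs):
    """Shannon entropy of a next-token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def choose_tree_shape(entropy, tau=1.0, deep=(8, 2), wide=(4, 5)):
    """Map prediction entropy to a (depth, branching) speculation-tree shape.

    Low entropy (confident step): a deeper, narrower tree to maximize
    speculation depth. High entropy (uncertain step): a shallower, wider
    tree to diversify exploration. tau, deep, and wide are illustrative
    values, not the paper's tuned policy.
    """
    return deep if entropy < tau else wide

# A peaked distribution selects the deep-narrow shape;
# a uniform one selects the shallow-wide shape.
confident = [0.9, 0.05, 0.03, 0.02]
uncertain = [0.25, 0.25, 0.25, 0.25]
print(choose_tree_shape(token_entropy(confident)))  # (8, 2)
print(choose_tree_shape(token_entropy(uncertain)))  # (4, 5)
```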
