
Speculative Decoding and Beyond: An In-Depth Survey of Techniques

Abstract

Sequential dependencies present a fundamental bottleneck in deploying large-scale autoregressive models, particularly for real-time applications. While traditional optimization approaches like pruning and quantization often compromise model quality, recent advances in generation-refinement frameworks demonstrate that this trade-off can be significantly mitigated. This survey presents a comprehensive taxonomy of generation-refinement frameworks, analyzing methods across autoregressive sequence tasks. We categorize methods based on their generation strategies (from simple n-gram prediction to sophisticated draft models) and refinement mechanisms (including single-pass verification and iterative approaches). Through systematic analysis of both algorithmic innovations and system-level implementations, we examine deployment strategies across computing environments and explore applications spanning text, image, and speech generation. This examination of both theoretical frameworks and practical implementations provides a foundation for future research in efficient autoregressive decoding.
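To make the generation-refinement pattern concrete, the sketch below shows a minimal greedy variant of speculative decoding with toy numeric "models": a cheap drafter proposes a block of tokens, and the expensive target model verifies them, accepting the longest matching prefix and supplying one corrected token on the first mismatch. The function names and the toy models are hypothetical illustrations, not from the survey, and this greedy-match acceptance is a simplification of the full rejection-sampling verification used in practice.

```python
def draft_model(prefix, k):
    # Toy cheap drafter: guesses the next k tokens (hypothetical stand-in
    # for a small draft model; here it happens to mimic the target).
    return [(prefix[-1] + 1 + i) % 100 for i in range(k)]

def target_model(prefix):
    # Toy "expensive" target model: the true next token given the prefix.
    return (prefix[-1] + 1) % 100

def speculative_decode(prefix, n_tokens, k=4):
    """Greedy speculative decoding sketch: draft k tokens at a time,
    accept the longest prefix the target model agrees with, and append
    one corrected target token on the first mismatch."""
    out = list(prefix)
    while len(out) - len(prefix) < n_tokens:
        draft = draft_model(out, k)
        for tok in draft:
            correct = target_model(out)
            if tok == correct:
                out.append(tok)       # draft token verified, keep it
            else:
                out.append(correct)   # reject draft, take target's token
                break
            if len(out) - len(prefix) >= n_tokens:
                break
    return out[len(prefix):]
```

When the drafter agrees with the target, each verification round accepts up to k tokens, which is the source of the speedup; a real implementation batches the target model's verification of all k draft positions into a single forward pass.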

@article{hu2025_2502.19732,
  title={Speculative Decoding and Beyond: An In-Depth Survey of Techniques},
  author={Yunhai Hu and Zining Liu and Zhenyuan Dong and Tianfan Peng and Bradley McDanel and Sai Qian Zhang},
  journal={arXiv preprint arXiv:2502.19732},
  year={2025}
}