Cacheback: Speculative Decoding With Nothing But Cache
Zhiyao Ma
In Gim
Lin Zhong

5 pages (main) + 1 page bibliography, 4 figures, 1 table
Abstract
We present Cacheback Decoding, a training-free and model-agnostic speculative decoding method that exploits locality in language to accelerate Large Language Model (LLM) inference. Cacheback leverages only Least Recently Used (LRU) cache tables of token n-grams to generate draft sequences. Despite its minimalist design, Cacheback achieves state-of-the-art performance among comparable methods, and its simplicity allows easy integration into existing systems. Cacheback also shows potential for fast adaptation to new domains.
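
The mechanism the abstract describes lends itself to a compact illustration. The Python sketch below shows one plausible way an LRU table of token n-grams could produce draft tokens for speculative decoding; it is a hypothetical sketch under our own assumptions, not the paper's implementation (the class name `NGramLRUCache`, the `capacity` and `k` parameters, and the single-successor table are all illustrative).

```python
from collections import OrderedDict

class NGramLRUCache:
    """Hypothetical LRU table mapping n-gram prefixes to a predicted next token."""

    def __init__(self, n: int = 2, capacity: int = 4096):
        self.n = n                  # prefix length used for lookups
        self.capacity = capacity    # maximum number of cached prefixes
        self.table = OrderedDict()  # prefix tuple -> most recently seen next token

    def update(self, tokens):
        """Record (prefix -> next token) pairs observed in generated text."""
        for i in range(len(tokens) - self.n):
            prefix = tuple(tokens[i:i + self.n])
            self.table[prefix] = tokens[i + self.n]
            self.table.move_to_end(prefix)      # mark as most recently used
            if len(self.table) > self.capacity:
                self.table.popitem(last=False)  # evict the least recently used

    def draft(self, context, k: int = 4):
        """Propose up to k draft tokens by chaining cache lookups."""
        out, ctx = [], list(context)
        for _ in range(k):
            nxt = self.table.get(tuple(ctx[-self.n:]))
            if nxt is None:
                break               # cache miss: stop drafting early
            out.append(nxt)
            ctx.append(nxt)
        return out
```

In a standard speculative decoding loop, the drafted tokens would be verified in parallel by the target model in a single forward pass, and `update()` would be called on newly accepted text, which is one way the cache could adapt quickly to a new domain.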
