
VLCache: Computing 2% Vision Tokens and Reusing 98% for Vision-Language Inference

Shengling Qin
Hao Yu
Chenxin Wu
Zheng Li
Yizhong Cao
Zhengyang Zhuge
Yuxin Zhou
Wentao Yao
Yi Zhang
Zhengheng Wang
Shuai Bai
Jianwei Zhang
Junyang Lin
Main: 9 pages · Bibliography: 2 pages · Appendix: 3 pages · 5 figures · 13 tables
Abstract

This paper presents VLCache, a cache reuse framework that exploits both the Key-Value (KV) cache and the encoder cache from prior multimodal inputs to eliminate costly recomputation when the same multimodal inputs recur. Unlike previous heuristic approaches, we formally identify the cumulative reuse error effect and demonstrate how to effectively minimize the non-prefix cache reuse error. We further analyze the varying importance of model layers and propose a dynamic, layer-aware recomputation strategy to balance accuracy and efficiency. Experimental results show that VLCache achieves accuracy on par with full recomputation while computing only 2-5% of the tokens, yielding 1.2x-16x speedups in time-to-first-token (TTFT). We develop an experimental implementation of the proposed VLCache pipeline based on SGLang, enabling significantly faster inference in practical deployments.
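To make the core idea concrete, the sketch below illustrates cache reuse with selective recomputation in Python: per-layer K/V tensors cached from an earlier request with the same image are kept, only a small budget of tokens (here ~2%) is recomputed, and the fresh states are patched into the cached ones. The error proxy, the fixed per-layer budget, and the placeholder recompute_tokens function are illustrative assumptions, not the authors' actual implementation or the SGLang API.

```python
# Minimal, self-contained sketch of KV-cache reuse with selective recomputation.
# All names and the error proxy are hypothetical; the real system selects tokens
# based on its analysis of cumulative reuse error and layer importance.
import torch

torch.manual_seed(0)
num_layers, num_tokens, head_dim = 4, 1024, 64
recompute_ratio = 0.02  # recompute ~2% of vision tokens, reuse the rest

# Cached per-layer (K, V) states from a previous request with the same image.
cached_kv = [(torch.randn(num_tokens, head_dim),
              torch.randn(num_tokens, head_dim)) for _ in range(num_layers)]

# Stand-in for a proxy that ranks tokens by expected reuse error.
reuse_error = torch.rand(num_tokens)
budget = max(1, int(recompute_ratio * num_tokens))
recompute_idx = torch.topk(reuse_error, budget).indices

def recompute_tokens(layer_id: int, idx: torch.Tensor):
    """Placeholder for running the real model on only the selected tokens."""
    return torch.randn(len(idx), head_dim), torch.randn(len(idx), head_dim)

patched_kv = []
for layer_id, (K, V) in enumerate(cached_kv):
    # A layer-aware policy could vary the budget per layer; here it is fixed.
    K, V = K.clone(), V.clone()
    K[recompute_idx], V[recompute_idx] = recompute_tokens(layer_id, recompute_idx)
    patched_kv.append((K, V))

print(f"Recomputed {budget}/{num_tokens} tokens per layer "
      f"({100 * budget / num_tokens:.1f}%), reused the rest.")
```

In a real deployment the savings come from skipping both the vision encoder and the prefill attention for the reused tokens, which is what drives the reported TTFT speedups.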
