
G-KV: Decoding-Time KV Cache Eviction with Global Attention

Abstract

Recent reasoning large language models (LLMs) excel at complex tasks but incur significant computational and memory costs due to their long sequence lengths. KV cache compression has emerged as an effective way to substantially improve reasoning efficiency. However, existing methods typically focus on prompt compression or on token eviction driven by local attention scores, overlooking the long-term importance of tokens. We propose G-KV, a KV cache eviction method that employs a global scoring mechanism, combining local and historical attention scores to more accurately assess token importance. Additionally, we introduce post-training techniques, including reinforcement learning and distillation, to optimize models for compressed KV cache settings. The code for this paper is available at: this https URL.
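The abstract does not specify how local and historical attention scores are combined, so the sketch below is only a minimal illustration of one plausible decoding-time eviction loop, assuming an exponential-moving-average update. The names `update_global_scores`, `evict_kv`, and the retention weight `beta` are hypothetical, not the paper's API.

```python
import torch

def update_global_scores(hist_scores: torch.Tensor,
                         local_scores: torch.Tensor,
                         beta: float = 0.9) -> torch.Tensor:
    # Exponential moving average of per-token attention mass (assumed
    # combination rule): history carries long-term importance, the local
    # term adds the newest decoding step's attention.
    return beta * hist_scores + (1.0 - beta) * local_scores

def evict_kv(keys: torch.Tensor, values: torch.Tensor,
             scores: torch.Tensor, budget: int):
    # Keep only the `budget` highest-scoring cached tokens,
    # preserving their original order in the sequence.
    if scores.numel() <= budget:
        return keys, values, scores
    keep = torch.topk(scores, budget).indices.sort().values
    return keys[keep], values[keep], scores[keep]

# Toy decoding step for a single attention head (shapes illustrative):
num_tokens, head_dim, budget = 2048, 64, 1024
keys = torch.randn(num_tokens, head_dim)
values = torch.randn(num_tokens, head_dim)
scores = torch.zeros(num_tokens)

query = torch.randn(head_dim)
attn = torch.softmax(keys @ query / head_dim ** 0.5, dim=-1)  # local scores
scores = update_global_scores(scores, attn)
keys, values, scores = evict_kv(keys, values, scores, budget)
```

An accumulated score of this kind lets a token that mattered many steps ago survive eviction even when the current query attends elsewhere, which is the gap the abstract attributes to purely local scoring.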

Comments: main text 8 pages, bibliography 4 pages, appendix 12 pages; 26 figures, 5 tables