Large language models (LLMs) deliver exceptional performance but incur significant serving costs due to their substantial memory requirements, with the key-value (KV) cache being a primary bottleneck. Existing KV cache compression techniques, such as quantization and pruning, treat keys and values uniformly and discard unimportant tokens entirely, overlooking fine-grained differences in the significance of different components of the KV cache. To address these limitations, we introduce LeanKV, a framework that advances KV cache compression by exploiting three levels of differentiation in the KV cache: (1) the differing impact of keys and values on attention computation, (2) the varying importance of tokens, and (3) the diverse dynamic sparsity patterns across attention heads. At the core of LeanKV is an on-GPU memory manager that compacts fragmented free memory into contiguous regions in parallel, effectively translating sparsity in the KV cache into performance gains. We evaluate LeanKV on several mainstream models, including a recent "thinking" model. LeanKV compresses the KV cache by to with near-lossless accuracy on complex workloads requiring sophisticated reasoning and long-generation capabilities, and improves throughput by to .
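To make the per-head sparsity idea concrete, the following is a minimal sketch, not LeanKV's actual implementation: it assumes a hypothetical compact_kv_cache helper and illustrative tensor shapes, and simply gathers each head's retained KV entries (heads may keep different numbers of tokens) into contiguous buffers, which is the kind of compaction that lets sparsity translate into memory and throughput gains.

    # Hypothetical illustration only; names, shapes, and masks are assumptions,
    # not the paper's API.
    import torch

    def compact_kv_cache(keys, values, keep_mask):
        """Gather each head's retained KV entries into contiguous buffers.

        keys, values: [num_heads, seq_len, head_dim]
        keep_mask:    [num_heads, seq_len] boolean; which tokens each head keeps
                      (heads may retain different counts, i.e. dynamic sparsity).
        Returns, per head, contiguous K/V blocks plus the original token
        positions so attention can still use correct position information.
        """
        compacted = []
        for h in range(keys.shape[0]):
            idx = keep_mask[h].nonzero(as_tuple=True)[0]   # retained token positions
            compacted.append((keys[h, idx].contiguous(),   # contiguous K block
                              values[h, idx].contiguous(), # contiguous V block
                              idx))                        # original positions
        return compacted

    if __name__ == "__main__":
        torch.manual_seed(0)
        num_heads, seq_len, head_dim = 4, 16, 8
        k = torch.randn(num_heads, seq_len, head_dim)
        v = torch.randn(num_heads, seq_len, head_dim)
        # Each head keeps a different, dynamically chosen subset of tokens.
        mask = torch.rand(num_heads, seq_len) > 0.5
        for h, (kh, vh, idx) in enumerate(compact_kv_cache(k, v, mask)):
            print(f"head {h}: kept {kh.shape[0]}/{seq_len} tokens at {idx.tolist()}")

In practice an on-GPU manager would perform this compaction over paged memory blocks and in parallel across heads rather than in a Python loop; the sketch only shows the gather-into-contiguous-regions idea.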
@article{zhang2025_2412.03131,
  title   = {Unifying KV Cache Compression for Large Language Models with LeanKV},
  author  = {Yanqi Zhang and Yuwei Hu and Runyuan Zhao and John C.S. Lui and Haibo Chen},
  journal = {arXiv preprint arXiv:2412.03131},
  year    = {2025}
}