Unifying KV Cache Compression for Large Language Models with LeanKV

4 December 2024
Yanqi Zhang
Yuwei Hu
Runyuan Zhao
John C. S. Lui
Haibo Chen
Abstract

Large language models (LLMs) exhibit exceptional performance but incur significant serving costs due to their substantial memory requirements, with the key-value (KV) cache being a primary bottleneck. Existing KV cache compression techniques, such as quantization and pruning, apply uniform treatment to keys and values and discard unimportant tokens entirely, overlooking the fine-grained differences in significance among components of the KV cache. To address these limitations, we introduce LeanKV, a framework that advances KV cache compression by exploiting three levels of differentiation in the KV cache: (1) the differing impact of keys and values on attention computation, (2) the varying importance of tokens, and (3) the diverse dynamic sparsity patterns across attention heads. At the core of LeanKV is an on-GPU memory manager that compacts fragmented free memory into contiguous regions in parallel, effectively translating sparsity in the KV cache into performance gains. We evaluate LeanKV on several mainstream models, including a recent "thinking" model. LeanKV compresses the KV cache by 2.7× to 5.7× with near-lossless accuracy on complex workloads requiring sophisticated reasoning and long-generation capabilities, and enhances throughput by 1.9× to 5.4×.
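To make the three levels of differentiation concrete, below is a minimal, illustrative sketch of differentiated KV cache compression. It is not the LeanKV algorithm: the specific heuristics (token importance measured as cumulative attention mass, 8-bit keys versus 4-bit values, entropy-derived per-head budgets, and the `compress_kv`/`fake_quantize` helper names) are assumptions chosen only to show how key/value asymmetry, token importance, and per-head sparsity can be combined.

```python
# Toy sketch of differentiated KV cache compression (NOT the LeanKV implementation).
# Assumptions: importance = attention mass, 8-bit keys / 4-bit values,
# per-head budgets scaled by attention entropy.
import torch

def fake_quantize(x: torch.Tensor, num_bits: int) -> torch.Tensor:
    """Simulate symmetric per-tensor quantization by rounding onto a coarser grid."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().amax().clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

def compress_kv(keys, values, attn_probs, keep_ratio=0.5):
    """
    keys, values: [num_heads, seq_len, head_dim]
    attn_probs:   [num_heads, q_len, seq_len] attention weights from recent queries
    Returns compressed (keys, values), zeroed where tokens are pruned.
    """
    num_heads, seq_len, _ = keys.shape
    # (2) Token importance: cumulative attention mass each cached token received.
    importance = attn_probs.sum(dim=1)                              # [num_heads, seq_len]
    # (3) Per-head dynamic budgets: heads with flatter (higher-entropy) attention keep more tokens.
    entropy = -(attn_probs.clamp_min(1e-9).log() * attn_probs).sum(-1).mean(-1)
    budgets = (keep_ratio * seq_len * entropy / entropy.mean()).long().clamp(1, seq_len)

    new_k, new_v = torch.zeros_like(keys), torch.zeros_like(values)
    for h in range(num_heads):
        kept = importance[h].topk(int(budgets[h])).indices
        # (1) Key/value asymmetry: keys retained at higher precision than values.
        new_k[h, kept] = fake_quantize(keys[h, kept], num_bits=8)
        new_v[h, kept] = fake_quantize(values[h, kept], num_bits=4)
    return new_k, new_v
```

A sketch like this only reduces the logical footprint of the cache; as the abstract notes, the actual system pairs such compression with an on-GPU memory manager that compacts the resulting fragmented free memory into contiguous regions so that sparsity translates into real memory savings and throughput gains.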

@article{zhang2025_2412.03131,
  title={Unifying KV Cache Compression for Large Language Models with LeanKV},
  author={Yanqi Zhang and Yuwei Hu and Runyuan Zhao and John C. S. Lui and Haibo Chen},
  journal={arXiv preprint arXiv:2412.03131},
  year={2025}
}