Key, Value, Compress: A Systematic Exploration of KV Cache Compression Techniques

14 March 2025
Neusha Javidnia, Bita Darvish Rouhani, Farinaz Koushanfar
Abstract

Large language models (LLMs) have demonstrated exceptional capabilities in generating text, images, and video content. However, as context length grows, the computational cost of attention increases quadratically with the number of tokens, presenting significant efficiency challenges. This paper presents an analysis of various Key-Value (KV) cache compression strategies, offering a comprehensive taxonomy that categorizes these methods by their underlying principles and implementation techniques. Furthermore, we evaluate their impact on performance and inference latency, providing critical insights into their effectiveness. Our findings highlight the trade-offs involved in KV cache compression and its influence on handling long-context scenarios, paving the way for more efficient LLM implementations.
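To make the memory/latency trade-off concrete, below is a minimal sketch of one family such a taxonomy covers: token eviction via a sliding window, where the KV cache is bounded so per-step attention cost stays proportional to the window size rather than the full context length. The class name, window size, and eviction policy here are illustrative assumptions, not the paper's method.

import numpy as np

def attention(q, K, V):
    # Single-query scaled dot-product attention over the cached keys/values.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

class SlidingWindowKVCache:
    # Toy KV cache that bounds memory by evicting the oldest entries.
    # Illustrates the token-eviction family of compression strategies;
    # real methods often score tokens by accumulated attention mass
    # rather than age. Illustrative sketch only, not the paper's method.
    def __init__(self, max_tokens, d):
        self.max_tokens = max_tokens
        self.keys = np.empty((0, d))
        self.values = np.empty((0, d))

    def append(self, k, v):
        self.keys = np.vstack([self.keys, k])
        self.values = np.vstack([self.values, v])
        if len(self.keys) > self.max_tokens:
            # Drop the oldest entries so the cache never exceeds max_tokens.
            self.keys = self.keys[-self.max_tokens:]
            self.values = self.values[-self.max_tokens:]

# Usage: decode 1000 steps; per-step attention cost stays O(max_tokens),
# not O(step), because the cache never grows past the window.
d = 64
cache = SlidingWindowKVCache(max_tokens=128, d=d)
rng = np.random.default_rng(0)
for step in range(1000):
    k, v, q = rng.normal(size=(3, d))
    cache.append(k[None, :], v[None, :])
    out = attention(q, cache.keys, cache.values)

Capping the cache at a fixed window bounds both memory and per-step attention cost, at the price of potential quality loss when distant context matters; that accuracy-versus-efficiency tension is the trade-off the paper's evaluation examines.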

@article{javidnia2025_2503.11816,
  title={Key, Value, Compress: A Systematic Exploration of KV Cache Compression Techniques},
  author={Neusha Javidnia and Bita Darvish Rouhani and Farinaz Koushanfar},
  journal={arXiv preprint arXiv:2503.11816},
  year={2025}
}