ZSMerge: Zero-Shot KV Cache Compression for Memory-Efficient Long-Context LLMs

13 March 2025
Xin Liu
Pei Liu
Guoming Tang
Abstract

The linear growth of key-value (KV) cache memory and the quadratic computational complexity of attention mechanisms pose significant bottlenecks for large language models (LLMs) in long-context processing. While existing KV cache optimization methods address these challenges through token pruning or feature merging, they often incur irreversible information loss or require costly parameter retraining. To address these limitations, we propose ZSMerge, a dynamic KV cache compression framework designed for efficient cache management, featuring three key operations: (1) fine-grained memory allocation guided by multi-dimensional token importance metrics at head-level granularity, (2) a residual merging mechanism that preserves critical context through compensated attention scoring, and (3) a zero-shot adaptation mechanism compatible with diverse LLM architectures without requiring retraining. ZSMerge significantly enhances memory efficiency and inference speed with negligible performance degradation across LLMs. When applied to LLaMA2-7B, it achieves a 20:1 compression ratio for key-value cache retention (reducing the memory footprint to 5% of the baseline) while sustaining comparable generation quality, coupled with a threefold throughput gain at extreme 54k-token contexts that eliminates out-of-memory failures. The code is available at this https URL.
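To make the described operations concrete, below is a minimal, hypothetical PyTorch sketch of importance-guided slot selection combined with residual merging of evicted cache entries. It is not the authors' implementation: the function and argument names (compress_kv, importance, budget) are illustrative assumptions, per-head importance is taken here as an externally supplied score such as accumulated attention mass, and the paper's compensated attention scoring is approximated by an importance-weighted average of the evicted keys and values folded into a single residual slot.

import torch

def compress_kv(keys, values, importance, budget):
    # keys, values : [num_heads, seq_len, head_dim]
    # importance   : [num_heads, seq_len], e.g. accumulated attention mass per cached token
    # budget       : number of KV slots to retain per head (>= 2)
    num_heads, seq_len, head_dim = keys.shape
    if seq_len <= budget:
        return keys, values

    # (1) Per-head selection of the most important tokens within the budget,
    #     reserving one slot for the merged residual entry.
    kept = budget - 1
    top = importance.topk(kept, dim=-1).indices                # [H, kept]
    gather_idx = top.unsqueeze(-1).expand(-1, -1, head_dim)    # [H, kept, D]
    k_keep = keys.gather(1, gather_idx)
    v_keep = values.gather(1, gather_idx)

    # (2) Merge evicted tokens, weighted by importance, into one residual slot
    #     so their contribution is compensated rather than discarded outright.
    head_idx = torch.arange(num_heads, device=keys.device).unsqueeze(1)
    evicted = torch.ones(num_heads, seq_len, dtype=torch.bool, device=keys.device)
    evicted[head_idx, top] = False
    w = importance.masked_fill(~evicted, 0.0)
    w = (w / w.sum(dim=-1, keepdim=True).clamp_min(1e-6)).to(keys.dtype)  # [H, S]
    k_res = torch.einsum('hs,hsd->hd', w, keys).unsqueeze(1)              # [H, 1, D]
    v_res = torch.einsum('hs,hsd->hd', w, values).unsqueeze(1)

    # (3) The procedure uses only the cache and attention statistics, so it can
    #     be applied to a pretrained model without any retraining.
    return torch.cat([k_keep, k_res], dim=1), torch.cat([v_keep, v_res], dim=1)

# Example: compress a 4096-token cache down to 256 slots per head
# k_small, v_small = compress_kv(k_cache, v_cache, importance, budget=256)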

@article{liu2025_2503.10714,
  title={ ZSMerge: Zero-Shot KV Cache Compression for Memory-Efficient Long-Context LLMs },
  author={ Xin Liu and Pei Liu and Guoming Tang },
  journal={arXiv preprint arXiv:2503.10714},
  year={ 2025 }
}