D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models

18 June 2024
Zhongwei Wan, Xinjian Wu, Yu Zhang, Yi Xin, Chaofan Tao, Zhihong Zhu, Xin Wang, Siqi Luo, Jing Xiong, Longyue Wang, Mi Zhang
Abstract

Generative inference in Large Language Models (LLMs) is impeded by the growing memory demands of the Key-Value (KV) cache, especially for longer sequences. Traditional KV cache eviction strategies, which discard less critical KV pairs based on attention scores, often degrade generation quality, leading to issues such as context loss or hallucinations. In this work, we introduce Dynamic Discriminative Operations (D2O), a KV cache compression method that optimizes KV cache size dynamically and discriminatively at two levels without fine-tuning, while preserving essential context. At the layer level, D2O leverages the varying densities of attention weights between shallow and deep layers to dynamically determine, via a novel dynamic allocation strategy, which layers should avoid excessive eviction, minimizing information loss. At the token level, D2O incorporates a compensation mechanism that maintains a similarity threshold to re-discriminate the importance of currently discarded tokens, determining whether they should be recalled and merged with similar tokens. We conduct experiments on various benchmarks and LLM architectures. Our results show that D2O not only achieves significant memory savings and enhances inference throughput by more than 3× but also maintains high-quality long-text generation.
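To make the token-level idea concrete, below is a minimal, illustrative sketch, not the authors' D2O implementation. It assumes a per-head KV cache, an accumulated attention score per cached token, and a fixed token budget; tokens slated for eviction are recalled and merged into their most similar retained token when cosine similarity exceeds a threshold, otherwise dropped. The layer-level budget allocation described in the abstract would sit outside this function. All names and shapes are hypothetical.

```python
# Illustrative sketch only -- not the authors' D2O code.
# keys, values: (seq_len, head_dim) KV cache for one head
# attn_scores:  (seq_len,) accumulated attention per cached token
import torch

def compress_kv_cache(keys, values, attn_scores, budget, sim_threshold=0.5):
    """Keep the `budget` highest-scoring tokens; merge each evicted token
    into its most similar kept token if cosine similarity >= threshold."""
    seq_len = keys.shape[0]
    if seq_len <= budget:
        return keys, values

    keep_idx = torch.topk(attn_scores, budget).indices
    evict_mask = torch.ones(seq_len, dtype=torch.bool)
    evict_mask[keep_idx] = False
    evict_idx = torch.nonzero(evict_mask).squeeze(-1)

    kept_k, kept_v = keys[keep_idx].clone(), values[keep_idx].clone()
    merge_count = torch.ones(budget)  # tokens merged into each kept slot

    # Cosine similarity between every evicted key and every kept key.
    sims = torch.nn.functional.cosine_similarity(
        keys[evict_idx].unsqueeze(1), kept_k.unsqueeze(0), dim=-1
    )
    best_sim, best_match = sims.max(dim=1)

    for i, (s, j) in enumerate(zip(best_sim, best_match)):
        if s >= sim_threshold:  # recall and merge instead of discarding
            merge_count[j] += 1
            w = 1.0 / merge_count[j]  # running-average merge weight
            kept_k[j] = (1 - w) * kept_k[j] + w * keys[evict_idx[i]]
            kept_v[j] = (1 - w) * kept_v[j] + w * values[evict_idx[i]]
    return kept_k, kept_v
```

The running-average merge here is one simple choice for combining recalled tokens with their nearest retained neighbor; the paper's actual merging and the dynamic per-layer budgets may differ.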

@article{wan2025_2406.13035,
  title={D2O: Dynamic Discriminative Operations for Efficient Long-Context Inference of Large Language Models},
  author={Zhongwei Wan and Xinjian Wu and Yu Zhang and Yi Xin and Chaofan Tao and Zhihong Zhu and Xin Wang and Siqi Luo and Jing Xiong and Longyue Wang and Mi Zhang},
  journal={arXiv preprint arXiv:2406.13035},
  year={2025}
}