KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse

21 February 2025
Jingbo Yang, Bairu Hou, Wei Wei, Yujia Bao, Shiyu Chang
Abstract

We describe KVLink, an approach for efficient key-value (KV) cache reuse in large language models (LLMs). In many LLM applications, different inputs share overlapping context, such as the same retrieved document appearing in multiple queries. However, LLMs still need to encode the entire context for each query, leading to redundant computation. In this paper, we propose a new strategy to eliminate this inefficiency: the KV cache of each document is precomputed independently, and during inference the KV caches of the retrieved documents are concatenated, allowing the model to reuse cached representations instead of recomputing them. To mitigate the performance degradation that arises when KV caches are computed independently for each document, KVLink introduces three key components: adjusting the positional embeddings of the KV cache at inference time to match the global positions after concatenation, using trainable special tokens to restore self-attention across independently encoded documents, and applying mixed-data fine-tuning to enhance performance while preserving the model's original capabilities. Experiments across 7 datasets demonstrate that KVLink improves question-answering accuracy by an average of 4% over state-of-the-art methods. Furthermore, by leveraging precomputed KV caches, our approach reduces time-to-first-token by up to 90% compared to standard LLM inference, making it a scalable and efficient solution for context reuse.
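
To make the first component concrete, below is a minimal Python/PyTorch sketch, not the authors' implementation, of how per-document KV caches encoded with local positions starting at zero could be concatenated while their rotary positional embeddings are adjusted to global positions. It assumes the cache stores keys before rotary rotation; names such as rope_rotate and concat_kv_caches are illustrative, and the trainable link tokens and mixed-data fine-tuning are omitted.

# Minimal sketch (assumed names, not the authors' code): concatenating
# per-document KV caches while rotating cached keys to their global positions.
import torch

def rope_rotate(x: torch.Tensor, positions: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embedding to x of shape (seq, head_dim) at the given positions."""
    seq, head_dim = x.shape
    half = head_dim // 2
    inv_freq = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    angles = positions[:, None].float() * inv_freq[None, :]      # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

def concat_kv_caches(doc_caches):
    """Concatenate per-document caches of (unrotated_keys, values), shifting each
    document's keys to the global offset it occupies after concatenation."""
    keys, values, offset = [], [], 0
    for k_raw, v in doc_caches:
        seq = k_raw.shape[0]
        global_pos = torch.arange(offset, offset + seq)
        keys.append(rope_rotate(k_raw, global_pos))  # keys are position-dependent
        values.append(v)                             # values carry no positional info
        offset += seq
    return torch.cat(keys, dim=0), torch.cat(values, dim=0)

if __name__ == "__main__":
    head_dim = 64
    # Two "documents" of length 5 and 7 whose caches were precomputed independently.
    docs = [(torch.randn(5, head_dim), torch.randn(5, head_dim)),
            (torch.randn(7, head_dim), torch.randn(7, head_dim))]
    k, v = concat_kv_caches(docs)
    print(k.shape, v.shape)  # torch.Size([12, 64]) torch.Size([12, 64])

Because rotary embeddings are pure rotations, the same effect can be achieved by re-rotating already-rotated cached keys by the document's offset; the sketch caches unrotated keys only to keep the example short.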

@article{yang2025_2502.16002,
  title={KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse},
  author={Jingbo Yang and Bairu Hou and Wei Wei and Yujia Bao and Shiyu Chang},
  journal={arXiv preprint arXiv:2502.16002},
  year={2025}
}