RetroInfer: A Vector-Storage Approach for Scalable Long-Context LLM Inference

5 May 2025
Yaoqi Chen, Jinkai Zhang, Baotong Lu, Qianxi Zhang, Chengruidong Zhang, Jingjia Luo, Di Liu, Huiqiang Jiang, Qi Chen, Jing Liu, Bailu Ding, Xiao Yan, Jiawei Jiang, Chen Chen, Mingxing Zhang, Yuqing Yang, Fan Yang, Mao Yang

Abstract

The growing context lengths of large language models (LLMs) pose significant challenges for efficient inference, primarily due to GPU memory and bandwidth constraints. We present RetroInfer, a novel system that reconceptualizes the key-value (KV) cache as a vector storage system that exploits the inherent attention sparsity to accelerate long-context LLM inference. At its core is the wave index, an Attention-aWare VEctor index that enables efficient and accurate retrieval of critical tokens through techniques such as tripartite attention approximation, accuracy-bounded attention estimation, and segmented clustering. Complementing this is the wave buffer, which coordinates KV cache placement and overlaps computation and data transfer across GPU and CPU to sustain high throughput. Unlike prior sparsity-based methods that struggle with token selection and hardware coordination, RetroInfer delivers robust performance without compromising model accuracy. Experiments on long-context benchmarks show up to 4.5X speedup over full attention within GPU memory limits and up to 10.5X over sparse attention baselines when the KV cache is extended to CPU memory, all while preserving full-attention-level accuracy.
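The core idea the abstract describes, retrieving only the critical cached tokens through an attention-aware vector index before computing attention, can be illustrated with a small Python sketch. This is not the authors' implementation: the function names, the plain inner-product k-means, the cluster count, and the fixed top-cluster retrieval below are illustrative stand-ins for the paper's segmented clustering and accuracy-bounded attention estimation.

import numpy as np

def build_clustered_kv_index(keys, n_clusters=64, n_iters=10, seed=0):
    """Group cached key vectors into clusters and return centroids plus the
    member indices of each cluster (a stand-in for segmented clustering;
    plain inner-product k-means is used here for simplicity)."""
    rng = np.random.default_rng(seed)
    centroids = keys[rng.choice(len(keys), n_clusters, replace=False)]
    for _ in range(n_iters):
        assign = np.argmax(keys @ centroids.T, axis=1)   # assign each key to its best centroid
        for c in range(n_clusters):
            members = keys[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    clusters = [np.where(assign == c)[0] for c in range(n_clusters)]
    return centroids, clusters

def retrieval_attention(query, keys, values, centroids, clusters, top_c=8):
    """Score clusters cheaply via their centroids, retrieve the members of the
    top-scoring clusters, and run exact softmax attention on that subset."""
    est = centroids @ query                              # cheap per-cluster attention estimate
    picked = np.argsort(est)[-top_c:]                    # clusters likely to hold critical tokens
    idx = np.concatenate([clusters[c] for c in picked])
    logits = keys[idx] @ query / np.sqrt(query.shape[-1])
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ values[idx]                               # attention output over retrieved tokens only

# Toy usage: a 32k-token KV cache with head dimension 128.
rng = np.random.default_rng(1)
keys = rng.standard_normal((32_768, 128)).astype(np.float32)
values = rng.standard_normal((32_768, 128)).astype(np.float32)
query = rng.standard_normal(128).astype(np.float32)
centroids, clusters = build_clustered_kv_index(keys)
output = retrieval_attention(query, keys, values, centroids, clusters)

The point of the sketch is the two-stage structure: a cheap per-cluster estimate narrows the candidate set, and exact attention runs only over the retrieved tokens, which is what allows most of the KV cache to reside outside GPU memory.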

@article{chen2025_2505.02922,
  title={RetroInfer: A Vector-Storage Approach for Scalable Long-Context LLM Inference},
  author={Yaoqi Chen and Jinkai Zhang and Baotong Lu and Qianxi Zhang and Chengruidong Zhang and Jingjia Luo and Di Liu and Huiqiang Jiang and Qi Chen and Jing Liu and Bailu Ding and Xiao Yan and Jiawei Jiang and Chen Chen and Mingxing Zhang and Yuqing Yang and Fan Yang and Mao Yang},
  journal={arXiv preprint arXiv:2505.02922},
  year={2025}
}