Towards Efficient Key-Value Cache Management for Prefix Prefilling in LLM Inference

The increasing adoption of large language models (LLMs) with extended context windows necessitates efficient Key-Value Cache (KVC) management to optimize inference performance. Inference workloads such as Retrieval-Augmented Generation (RAG) and agent workflows exhibit high cache reusability, making efficient caching critical for reducing redundant computation and improving inference speed. We analyze real-world KVC access patterns using publicly available traces and evaluate commercial key-value stores like Redis and state-of-the-art RDMA-based systems (CHIME [1] and Sherman [2]) for KVC metadata management. Our work demonstrates the lack of a tailored storage solution for KVC prefilling, underscores the need for an efficient distributed caching system with optimized metadata management for LLM workloads, and provides insights into designing improved KVC management systems for scalable, low-latency inference.
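To make the evaluated setting concrete, the sketch below shows one plausible way KVC metadata for prefix prefilling could be kept in a commercial key-value store such as Redis: token prefixes are split into fixed-size blocks, each block is keyed by an incremental hash of all tokens up to that point, and a lookup walks the blocks until the first miss to find the longest reusable prefix. This is a minimal illustration under assumed conventions; the block size, key scheme, and helper names (prefix_block_keys, lookup_reusable_prefix) are hypothetical and not taken from the paper.

import hashlib

import redis  # pip install redis; assumes a local Redis server for illustration


def prefix_block_keys(token_ids, block_size=16):
    """Split a token-id prefix into fixed-size blocks and derive a cache key
    per block from an incremental hash, so each key identifies the full
    preceding context, not just the block's own tokens (hypothetical scheme)."""
    keys = []
    h = hashlib.sha256()
    usable = len(token_ids) - len(token_ids) % block_size
    for start in range(0, usable, block_size):
        block = token_ids[start:start + block_size]
        h.update(repr(block).encode("utf-8"))
        keys.append("kvc:" + h.hexdigest()[:16])
    return keys


def lookup_reusable_prefix(r, token_ids, block_size=16):
    """Return metadata for the longest run of already-cached prefix blocks,
    stopping at the first miss, as a prefix-prefill reuse check might."""
    hits = []
    for key in prefix_block_keys(token_ids, block_size):
        meta = r.hgetall(key)  # per-block metadata: node, offset, length, ...
        if not meta:
            break              # first miss ends the reusable prefix
        hits.append((key, meta))
    return hits


if __name__ == "__main__":
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)
    prompt = list(range(48))   # toy token-id sequence (three blocks of 16)
    # Populate metadata for the first two blocks as if a prior request cached them.
    for key in prefix_block_keys(prompt)[:2]:
        r.hset(key, mapping={"node": "worker-0", "offset": "0", "len": "16"})
    reusable = lookup_reusable_prefix(r, prompt)
    print(f"reusable prefix blocks: {len(reusable)} of {len(prefix_block_keys(prompt))}")

In such a layout, every prefix-reuse check costs one metadata round trip per block, which is why the abstract argues that generic stores and existing RDMA-based indexes leave room for a storage design tailored to KVC prefilling.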
@article{zhu2025_2505.21919,
  title   = {Towards Efficient Key-Value Cache Management for Prefix Prefilling in LLM Inference},
  author  = {Yue Zhu and Hao Yu and Chen Wang and Zhuoran Liu and Eun Kyung Lee},
  journal = {arXiv preprint arXiv:2505.21919},
  year    = {2025}
}