
AlayaDB: The Data Foundation for Efficient and Effective Long-context LLM Inference

Abstract

AlayaDB is a cutting-edge vector database system natively architected at AlayaDB AI for efficient and effective long-context inference for Large Language Models (LLMs). Specifically, it decouples the KV cache and attention computation from the LLM inference system and encapsulates them into a novel vector database system. For Model-as-a-Service (MaaS) providers, AlayaDB consumes fewer hardware resources and delivers higher generation quality across workloads with different Service Level Objectives (SLOs), compared with existing alternative solutions (e.g., KV cache disaggregation, retrieval-based sparse attention). The crux of AlayaDB is that it abstracts the attention computation and cache management of LLM inference into a query processing procedure and optimizes performance via a native query optimizer. In this work, we demonstrate the effectiveness of AlayaDB via (i) three use cases from our industry partners, and (ii) extensive experimental results on LLM inference benchmarks.
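To make the "attention as query processing" abstraction concrete, here is a minimal sketch (not AlayaDB's actual implementation) of retrieval-based sparse attention: each decoding step is treated as a top-k similarity query over the cached key vectors, and softmax attention is computed only over the retrieved entries. The function name `topk_sparse_attention` and the NumPy formulation are illustrative assumptions.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=4):
    """Approximate attention for one query vector by retrieving only the
    top-k most similar cached keys -- a query-processing view of attention.
    q: (d,), K: (n, d), V: (n, d). Names and shapes are illustrative."""
    scores = K @ q / np.sqrt(q.shape[0])   # similarity "query" over the KV cache
    topk = np.argsort(scores)[-k:]         # retrieve the k best-matching keys
    w = np.exp(scores[topk] - scores[topk].max())
    w /= w.sum()                           # softmax over retrieved entries only
    return w @ V[topk]                     # weighted sum of retrieved values
```

With k equal to the cache size this reduces to exact attention; smaller k trades a little generation quality for much less compute and memory traffic, which is the trade-off a query optimizer can tune against an SLO.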

@article{deng2025_2504.10326,
  title={AlayaDB: The Data Foundation for Efficient and Effective Long-context LLM Inference},
  author={Yangshen Deng and Zhengxin You and Long Xiang and Qilong Li and Peiqi Yuan and Zhaoyang Hong and Yitao Zheng and Wanting Li and Runzhong Li and Haotian Liu and Kyriakos Mouratidis and Man Lung Yiu and Huan Li and Qiaomu Shen and Rui Mao and Bo Tang},
  journal={arXiv preprint arXiv:2504.10326},
  year={2025}
}