AIBrix: Towards Scalable, Cost-Effective Large Language Model Inference Infrastructure

AIBrix Team: Jiaxin Shan, Varun Gupta, Le Xu, Haiyang Shi, Jingyuan Zhang, Ning Wang, Linhui Xu, Rong Kang, Tongping Liu, Yifei Zhang, Yiqing Zhu, Shuowei Jin, Gangmuk Lim, Binbin Chen, Zuzhi Chen, Xiao Liu, Xin Chen, Kante Yin, Chak-Pong Chung, Chenyu Jiang, Yicheng Lu, Jianjun Chen, Caixue Lin, Wu Xiang, Rui Shi, Liguang Xie
Main: 8 pages, 10 figures, 1 table; Bibliography: 3 pages; Appendix: 1 page
Abstract

We introduce AIBrix, a cloud-native, open-source framework designed to optimize and simplify large-scale LLM deployment in cloud environments. Unlike traditional cloud-native stacks, AIBrix follows a co-design philosophy, ensuring that every layer of the infrastructure is purpose-built for seamless integration with inference engines such as vLLM. AIBrix introduces several key innovations to reduce inference cost and enhance performance, including high-density LoRA management for dynamic adapter scheduling, LLM-specific autoscalers, and prefix-aware, load-aware routing. To further improve efficiency, AIBrix incorporates a distributed KV cache that boosts token reuse across nodes, yielding a 50% increase in throughput and a 70% reduction in inference latency. AIBrix also provides a unified AI runtime that streamlines model management while maintaining vendor-agnostic engine compatibility. For large-scale multi-node inference, AIBrix employs hybrid orchestration, leveraging Kubernetes for coarse-grained scheduling and Ray for fine-grained execution, to balance efficiency and flexibility. Additionally, an SLO-driven GPU optimizer dynamically adjusts resource allocations, optimizing heterogeneous serving to maximize cost efficiency while maintaining service guarantees. Finally, AIBrix enhances system reliability with AI accelerator diagnostic tools that enable automated failure detection and mock-up testing to improve fault resilience. AIBrix is available at https://github.com/vllm-project/aibrix.
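The abstract names prefix-aware, load-aware routing as one of the cost levers but does not spell out the mechanism. The sketch below is a minimal, hypothetical illustration of how such a router might score replicas: it blends the expected prefix-cache hit rate (favoring KV reuse) against current load (avoiding hot spots). All names here (Pod, prefix_blocks, route, BLOCK, alpha) are assumptions for illustration, not AIBrix's actual API.

```python
from dataclasses import dataclass, field
import hashlib

# Hypothetical sketch only: the names and the scoring rule below are
# illustrative assumptions, not AIBrix's real routing implementation.

BLOCK = 16  # assumed number of tokens per hashed prefix block


@dataclass
class Pod:
    name: str
    cached_blocks: set = field(default_factory=set)  # block hashes resident in KV cache
    running_requests: int = 0
    capacity: int = 8  # max concurrent requests


def prefix_blocks(token_ids):
    """Chain-hash successive token blocks so two prompts sharing a
    prefix share a leading run of identical block hashes."""
    h, out = hashlib.sha256(), []
    for i in range(0, len(token_ids) - len(token_ids) % BLOCK, BLOCK):
        h.update(repr(token_ids[i:i + BLOCK]).encode())
        out.append(h.copy().digest())
    return out


def route(pods, token_ids, alpha=0.7):
    """Pick the pod maximizing alpha * prefix-hit-rate - (1 - alpha) * load."""
    blocks = prefix_blocks(token_ids)

    def score(pod):
        hits = sum(b in pod.cached_blocks for b in blocks)
        hit_rate = hits / max(len(blocks), 1)
        load = pod.running_requests / max(pod.capacity, 1)
        return alpha * hit_rate - (1 - alpha) * load

    return max(pods, key=score)
```

Blending the two signals (via alpha here) matters because pure prefix affinity would concentrate traffic on a few warm replicas, while pure load balancing would discard reusable KV state; some such trade-off is what "prefix-aware, load-aware" routing implies.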
