LLM Cache Bandit Revisited: Addressing Query Heterogeneity for Cost-Effective LLM Inference

Main: 9 Pages
2 Figures
Bibliography: 3 Pages
2 Tables
Abstract

This paper revisits the LLM cache bandit problem, with a special focus on addressing query heterogeneity for cost-effective LLM inference. Previous works often assume uniform query sizes. Heterogeneous query sizes introduce a combinatorial structure to cache selection, making the cache replacement process more computationally and statistically challenging. We treat optimal cache selection as a knapsack problem and employ an accumulation-based strategy to effectively balance computational overhead and cache updates. In the theoretical analysis, we prove that the regret of our algorithm achieves an $O(\sqrt{MNT})$ bound, improving the coefficient $\sqrt{MN}$ compared to the $O(MN\sqrt{T})$ result in Berkeley, where $N$ is the total number of queries and $M$ is the cache size. We also provide a problem-dependent bound, which was absent in previous works. Experiments on real-world data show that our algorithm reduces the total cost by approximately 12%.
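As a concrete illustration of the knapsack view of cache selection described above (a sketch, not the paper's algorithm), the following Python snippet picks the subset of queries to cache that maximizes estimated cost savings under a total cache-size budget. All names here (`sizes`, `savings`, `budget`) are hypothetical; in the bandit setting the savings would be plug-in estimates learned from observed queries.

```python
# Illustrative sketch only: cache selection viewed as a 0/1 knapsack.
# Each candidate query q has a response size s_q and an estimated
# saving v_q (e.g., hit frequency times per-query inference cost);
# we choose the subset with maximal total saving whose total size
# fits within the cache capacity M.

from typing import List, Tuple

def knapsack_cache_selection(
    sizes: List[int],      # cached-response size per query
    savings: List[float],  # estimated cost saved by caching each query
    budget: int,           # total cache capacity M
) -> Tuple[float, List[int]]:
    """Standard 0/1 knapsack DP; returns (best total saving, chosen indices)."""
    n = len(sizes)
    dp = [0.0] * (budget + 1)            # dp[c] = best saving at capacity c
    taken = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for c in range(budget, sizes[i] - 1, -1):  # descending: each item used once
            cand = dp[c - sizes[i]] + savings[i]
            if cand > dp[c]:
                dp[c] = cand
                taken[i][c] = True
    # Backtrack to recover which queries were selected.
    chosen, c = [], budget
    for i in range(n - 1, -1, -1):
        if taken[i][c]:
            chosen.append(i)
            c -= sizes[i]
    return dp[budget], chosen[::-1]

# Example: three queries with sizes 2, 3, 4 and estimated savings
# 3.0, 4.0, 5.0 under a cache budget of 5 size units.
best, picked = knapsack_cache_selection([2, 3, 4], [3.0, 4.0, 5.0], 5)
print(best, picked)  # 7.0 [0, 1]
```

Since the true savings are unknown online, an accumulation-based strategy in the abstract's sense would plausibly re-solve this selection only after enough new evidence has accumulated, rather than on every query, trading a small delay in cache updates for much lower computational overhead.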
