
SlimCaching: Edge Caching of Mixture-of-Experts for Distributed Inference

Main text: 15 pages, 11 figures; bibliography: 2 pages
Abstract

Mixture-of-Experts (MoE) models improve the scalability of large language models (LLMs) by activating only a small subset of relevant experts per input. However, the sheer number of expert networks in an MoE model imposes a significant storage burden on an edge device. To address this challenge, we consider a scenario where experts are dispersed across an edge network for distributed inference. Based on the popular Top-K expert selection strategy, we formulate a latency minimization problem that optimizes expert caching on edge servers under storage constraints. When K = 1, the problem reduces to a monotone submodular maximization problem with knapsack constraints, for which we design a greedy-based algorithm with a (1 - 1/e)-approximation guarantee. For the general case where K ≥ 1, expert co-activation within the same MoE layer introduces non-submodularity, which renders greedy methods ineffective. To tackle this issue, we propose a successive greedy decomposition method that decomposes the original problem into a series of subproblems, each of which is solved via dynamic programming. Furthermore, we design an accelerated algorithm based on the max-convolution technique to obtain an approximate solution with a provable guarantee in polynomial time. Simulation results on various MoE models demonstrate that our method significantly reduces inference latency compared to existing baselines.
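
To illustrate the K = 1 case, the sketch below shows a generic cost-benefit greedy for monotone submodular maximization under a knapsack (storage) constraint. This is not the paper's implementation: the value oracle, expert identifiers, and sizes are hypothetical placeholders, and the paper's (1 - 1/e) guarantee may rely on additional steps (e.g., partial enumeration or comparison with the best single element) beyond this plain greedy loop.

```python
# Minimal sketch (assumption-laden): greedy expert caching by
# marginal-gain-per-unit-storage under a storage budget.
# `value`, `size`, and `experts` are hypothetical stand-ins for the
# paper's latency-reduction objective and expert storage costs.

def greedy_expert_caching(experts, value, size, budget):
    """Greedily select experts to cache on an edge server.

    experts : iterable of expert identifiers
    value   : callable value(S) -> float; assumed monotone submodular
              (e.g., expected latency reduction from caching set S)
    size    : dict mapping expert -> storage cost
    budget  : total storage capacity of the edge server
    """
    cached, used = set(), 0.0
    remaining = set(experts)
    while remaining:
        base = value(cached)
        best, best_ratio = None, 0.0
        for e in remaining:
            if used + size[e] > budget:
                continue  # skip experts that no longer fit
            gain = value(cached | {e}) - base
            ratio = gain / size[e]  # marginal gain per unit of storage
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            break  # nothing fits or no positive marginal gain remains
        cached.add(best)
        used += size[best]
        remaining.discard(best)
    return cached
```

For K ≥ 1, where co-activation of experts in the same layer breaks submodularity, this greedy rule alone is no longer adequate, which is why the abstract turns to successive greedy decomposition with dynamic programming and a max-convolution speedup.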
