
Theoretically Optimal Attention/FFN Ratios in Disaggregated LLM Serving

Chendong Song
Meixuan Wang
Hang Zhou
Hong Liang
Yuan Lyu
Zixi Chen
Yuwei Fan
Zijie Zhou
Main: 7 pages · Bibliography: 3 pages · Appendix: 4 pages · 9 figures · 6 tables
Abstract

Attention-FFN disaggregation (AFD) is an emerging architecture for LLM decoding that separates state-heavy, KV-cache-dominated Attention computation from stateless, compute-intensive FFN computation, connected by per-step communication. While AFD enables independent scaling of memory and compute resources, its performance is highly sensitive to the Attention/FFN provisioning ratio: mis-sizing induces step-level blocking and costly device idle time. We develop a tractable analytical framework for sizing AFD bundles in an rA-1F topology, where the key difficulty is that Attention-side work is nonstationary (token context grows, and requests are continuously replenished with random lengths), while FFN work is stable given the aggregated batch. Using a probabilistic workload model, we derive closed-form rules for the optimal A/F ratio that maximize average throughput per instance across the system. A trace-calibrated AFD simulator validates the theory: across workloads, the theoretically optimal A/F ratio matches the simulation-optimal ratio to within 10% and consistently reduces idle time.
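The sizing intuition behind the abstract can be sketched with a simple balance rule: since Attention and FFN instances synchronize every decode step, idle time is minimized when the expected per-step Attention time (spread across r instances) matches the FFN step time. This is an illustrative first-order approximation, not the paper's closed-form rule; all names and cost parameters below are assumptions for the sketch.

```python
def balanced_af_ratio(expected_context_len: float,
                      batch_size: int,
                      attn_time_per_token: float,
                      ffn_time_per_batch: float) -> float:
    """Illustrative balance heuristic for an rA-1F bundle.

    Attention step time per instance scales with the KV cache scanned:
    roughly batch_size * expected_context_len * attn_time_per_token,
    divided across r Attention instances. FFN step time is stable given
    the aggregated batch. Setting the two equal and solving for r gives
    the ratio at which neither side blocks the other.
    """
    attn_total_step_time = batch_size * expected_context_len * attn_time_per_token
    return attn_total_step_time / ffn_time_per_batch
```

For example, with an expected context of 1000 tokens, a batch of 32, 1 µs of Attention work per cached token, and an 8 ms FFN step, the rule suggests about 4 Attention instances per FFN instance. In the nonstationary setting the paper targets, `expected_context_len` would itself be derived from the request-arrival and length distributions rather than fixed.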
