Processing long contexts has become a critical capability for modern large language models (LLMs). However, serving long-context LLMs comes with significant inference costs due to the high memory overhead of the key-value (KV) cache. Existing work leverages dynamic sparse attention algorithms (DSAes) to mitigate the KV cache overhead, but these algorithms rely on top-k KV cache selection, which results in a trade-off between accuracy and efficiency. A larger k improves accuracy but decreases efficiency, while a smaller k boosts efficiency but compromises accuracy. To overcome this trade-off, this paper presents PSA, a Progressive Sparse Attention mechanism that integrates algorithmic innovations with system co-design to achieve both high inference accuracy and improved efficiency in LLM serving. The PSA algorithm adaptively adjusts the KV cache budget of different tokens and layers according to their real attention weight distributions, rather than relying on a fixed budget k. This enables high accuracy while minimizing KV cache usage. To further enhance execution efficiency, we introduce a pipelined iteration scheme that reduces CPU-GPU interleaving and synchronization overhead during PSA computation. Additionally, we implement unified GPU memory management that optimizes PSA's memory utilization by accounting for uneven memory requirements across different model layers. Extensive experimental results demonstrate that PSA reduces KV cache usage for attention computation by up to 2.4x and 8.8x, and increases end-to-end serving throughput by up to 1.4x and 2.0x, compared to state-of-the-art DSAes and systems without sparse attention, respectively.
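The sketch below is only an illustration of the accuracy-efficiency trade-off described above, not the authors' PSA algorithm: it contrasts a fixed top-k selection of KV cache entries with an adaptive budget that grows or shrinks with the real attention weight distribution of each query. The function names (`fixed_topk_select`, `adaptive_select`) and the `coverage` threshold are illustrative assumptions.

```python
import torch

def fixed_topk_select(scores: torch.Tensor, k: int) -> torch.Tensor:
    """Pick the k highest-scoring KV positions, regardless of how the
    attention mass is actually distributed (the fixed-budget baseline)."""
    return torch.topk(scores, k, dim=-1).indices

def adaptive_select(scores: torch.Tensor, coverage: float = 0.95) -> torch.Tensor:
    """Pick the smallest set of KV positions (sorted by weight) whose
    cumulative softmax attention mass reaches `coverage`, so the budget
    adapts to the weight distribution instead of a fixed k."""
    weights = torch.softmax(scores, dim=-1)
    sorted_w, order = torch.sort(weights, dim=-1, descending=True)
    cum = torch.cumsum(sorted_w, dim=-1)
    # number of entries needed to cross the coverage target (at least 1)
    budget = int((cum < coverage).sum(dim=-1).max().item()) + 1
    return order[..., :budget]

# Example: a query whose attention concentrates on two tokens.
scores = torch.tensor([[8.0, 7.5, 0.1, -3.0, -5.0, -6.0]])
print(fixed_topk_select(scores, k=4))   # always 4 entries
print(adaptive_select(scores, 0.95))    # only 2 entries needed here
```

In this toy case the adaptive rule keeps 2 KV entries instead of a fixed 4, which mirrors the abstract's point that a per-token, per-layer budget can preserve accuracy while cutting KV cache usage.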
@article{zhou2025_2503.00392,
  title   = {Progressive Sparse Attention: Algorithm and System Co-design for Efficient Attention in LLM Serving},
  author  = {Qihui Zhou and Peiqi Yin and Pengfei Zuo and James Cheng},
  journal = {arXiv preprint arXiv:2503.00392},
  year    = {2025}
}