Twilight: Adaptive Attention Sparsity with Hierarchical Top-p Pruning

Leveraging attention sparsity to accelerate long-context large language models (LLMs) has been a hot research topic. However, current algorithms such as sparse attention or key-value (KV) cache compression tend to use a fixed budget, which presents a significant challenge during deployment because it fails to account for the dynamic nature of real-world scenarios, where the optimal balance between accuracy and efficiency can vary greatly. In this paper, we find that borrowing top-p sampling (nucleus sampling) for sparse attention can surprisingly achieve adaptive budgeting. Based on this, we propose Twilight, a framework that brings adaptive sparsity to any existing sparse attention algorithm without sacrificing its accuracy. Empirical results show that Twilight can adaptively prune up to 98% of redundant tokens, accelerating both self-attention operations and end-to-end per-token latency in long-context LLM decoding.
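To make the core idea concrete, below is a minimal PyTorch sketch of top-p (nucleus-style) pruning applied to attention weights: for each query, it keeps the smallest set of cached tokens whose softmax weights accumulate to at least p, so the budget adapts to how concentrated the attention distribution is. The function name top_p_prune, the tensor shapes, and the default threshold are illustrative assumptions; this is not Twilight's actual hierarchical implementation, which layers this criterion on top of existing sparse attention algorithms.

    import torch

    def top_p_prune(attn_scores: torch.Tensor, p: float = 0.98) -> torch.Tensor:
        """Illustrative top-p (nucleus-style) pruning of attention weights.

        attn_scores: [num_heads, seq_len] pre-softmax scores for one query.
        Returns a boolean mask [num_heads, seq_len] that keeps, per head, the
        smallest set of tokens whose softmax weights sum to at least p.
        """
        weights = torch.softmax(attn_scores, dim=-1)
        sorted_w, sorted_idx = torch.sort(weights, dim=-1, descending=True)
        cumulative = torch.cumsum(sorted_w, dim=-1)
        # Keep a token if the cumulative mass *before* it is still below p;
        # this retains at least one token per head and prunes the rest.
        keep_sorted = (cumulative - sorted_w) < p
        mask = torch.zeros_like(weights, dtype=torch.bool)
        mask.scatter_(-1, sorted_idx, keep_sorted)
        return mask

    # Usage: the number of retained tokens varies per head, i.e. the budget is adaptive.
    scores = torch.randn(8, 4096)      # 8 heads, 4096 cached tokens (hypothetical sizes)
    mask = top_p_prune(scores, p=0.98)
    print(mask.sum(dim=-1))            # tokens kept per head

Note that, unlike a fixed top-k budget, the kept-token count here is determined by the attention distribution itself: a peaked distribution retains few tokens, a flat one retains many.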
@article{lin2025_2502.02770,
  title   = {Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning},
  author  = {Chaofan Lin and Jiaming Tang and Shuo Yang and Hanshuo Wang and Tian Tang and Boyu Tian and Ion Stoica and Song Han and Mingyu Gao},
  journal = {arXiv preprint arXiv:2502.02770},
  year    = {2025}
}