
Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters

Main: 15 pages
Bibliography: 2 pages
Appendix: 5 pages
5 figures
5 tables
Abstract

Our formulation reveals that the reduction across the sequence axis can be efficiently computed in parallel through a tree reduction. Our algorithm, Tree Attention, parallelizes exact attention computation across multiple GPUs and enables cross-device decoding asymptotically faster (up to 8x faster in our experiments) than state-of-the-art approaches such as Ring Attention, while also requiring significantly less communication volume and incurring 2x less peak memory. We demonstrate that Tree Attention speeds up decoding by up to 4x on Llama 3.1-8B and can be applied to a variety of hardware and networking setups, such as H100 DGX nodes, AMD MI300x nodes, and PCIe-connected NVIDIA RTX 4090s. Our code is publicly available here: this https URL
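The tree reduction the abstract refers to relies on the softmax-attention reduction being expressible with an associative combine operator, so partial results from different key/value chunks (e.g. one per GPU) can be merged pairwise in log-depth. The sketch below is an illustration of that idea on a single host with NumPy, not the paper's implementation; the helper names (`partial_attention`, `combine`, `tree_attention`) are hypothetical.

```python
import numpy as np

def partial_attention(q, k, v):
    # Per-chunk softmax statistics for one query vector:
    # running max (for numerical stability), exp-sum denominator,
    # and the unnormalized weighted sum of values.
    s = k @ q                       # attention scores for this chunk of keys
    m = s.max()
    p = np.exp(s - m)
    return m, p.sum(), p @ v

def combine(a, b):
    # Associative merge of two partial results -- the tree-reduction operator.
    # Rescales both sides to a shared max before summing, logsumexp-style.
    m = max(a[0], b[0])
    wa, wb = np.exp(a[0] - m), np.exp(b[0] - m)
    return m, wa * a[1] + wb * b[1], wa * a[2] + wb * b[2]

def tree_attention(q, k_chunks, v_chunks):
    # Each "device" computes its local partial result independently...
    parts = [partial_attention(q, k, v) for k, v in zip(k_chunks, v_chunks)]
    # ...then results merge pairwise: log2(N) combine rounds instead of
    # the N sequential passes a ring-style reduction would take.
    while len(parts) > 1:
        parts = [combine(parts[i], parts[i + 1]) if i + 1 < len(parts) else parts[i]
                 for i in range(0, len(parts), 2)]
    m, denom, num = parts[0]
    return num / denom              # final normalized attention output
```

Because `combine` is associative, the pairwise merge order does not change the result, which is what lets the reduction follow the network topology (e.g. NVLink within a node, InfiniBand across nodes) rather than a fixed ring.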
