Scout Before You Attend: Sketch-and-Walk Sparse Attention for Efficient LLM Inference

Hoang Anh Duy Le
Sahil Joshi
Zeyu Yang
Zhaozhuo Xu
Anshumali Shrivastava
Main: 27 pages
7 figures
Bibliography: 5 pages
8 tables
Abstract

Self-attention dominates the computational and memory cost of long-context LLM inference in both the prefill and decode phases. To address this challenge, we introduce Sketch&Walk Attention, a training-free sparse attention method that determines sparsity using lightweight sketches and a deterministic walk. Sketch&Walk applies Hadamard sketching to obtain inexpensive approximations of attention scores, then aggregates these estimates across layers via a walk mechanism that captures attention influence beyond direct token-to-token interactions. The accumulated walk scores are used to select the top-k attention blocks, enabling dynamic sparsity with a single training-free algorithm that applies uniformly to both the prefill and decode phases, together with custom sparse attention kernels. Across a wide range of models and tasks, Sketch&Walk maintains near-lossless accuracy at 20% attention density, can slightly outperform dense attention in some settings, and achieves up to a 6x inference speedup.
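To make the core idea concrete, below is a minimal NumPy sketch of the two ingredients the abstract names: a randomized Hadamard sketch that cheaply preserves query-key inner products, and block-level top-k selection from the sketched scores. This is an illustrative assumption, not the authors' implementation; the function names (`make_sketch`, `select_blocks`), the block-mean scoring, and all parameters are hypothetical, and the cross-layer walk aggregation and custom kernels are omitted.

```python
import numpy as np

def fwht(x):
    # Fast Walsh-Hadamard transform along the last axis (length must be a power of two).
    # Normalized so the transform is orthogonal and inner products are preserved.
    x = x.astype(np.float64).copy()
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x / np.sqrt(n)

def make_sketch(dim, sketch_dim, rng):
    # Random signs + Hadamard rotation + coordinate subsampling.
    # Applying the SAME map to queries and keys approximately preserves
    # their inner products, so sketched scores estimate attention logits.
    signs = rng.choice([-1.0, 1.0], size=dim)
    idx = rng.choice(dim, size=sketch_dim, replace=False)
    scale = np.sqrt(dim / sketch_dim)
    return lambda x: fwht(x * signs)[..., idx] * scale

def select_blocks(q, k, block, sketch_dim, keep, rng):
    # Sketch queries/keys, average within blocks, score block pairs from the
    # cheap estimates, and keep the top `keep` key blocks per query block.
    sketch = make_sketch(q.shape[-1], sketch_dim, rng)
    sq, sk = sketch(q), sketch(k)
    qb = sq.reshape(-1, block, sketch_dim).mean(axis=1)  # (num_q_blocks, s)
    kb = sk.reshape(-1, block, sketch_dim).mean(axis=1)  # (num_k_blocks, s)
    scores = qb @ kb.T                                   # approximate block-level logits
    return np.argsort(-scores, axis=1)[:, :keep]         # kept key-block indices
```

In the paper's full method, these per-layer estimates would additionally be accumulated by the walk mechanism before the top-k selection; here the selection is shown directly on one layer's sketched scores for brevity.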
