
∇NABLA: Neighborhood Adaptive Block-Level Attention

Dmitrii Mikhailov
Aleksey Letunovskiy
Maria Kovaleva
Vladimir Arkhipkin
Vladimir Korviakov
Vladimir Polovnikov
Viacheslav Vasilev
Evelina Sidorova
Denis Dimitrov
Main: 12 pages
Appendix: 6 pages
Bibliography: 3 pages
10 figures
4 tables
Abstract

Recent progress in transformer-based architectures has demonstrated remarkable success in video generation tasks. However, the quadratic complexity of full attention mechanisms remains a critical bottleneck, particularly for high-resolution and long-duration video sequences. In this paper, we propose NABLA, a novel Neighborhood Adaptive Block-Level Attention mechanism that dynamically adapts to sparsity patterns in video diffusion transformers (DiTs). By leveraging block-wise attention with an adaptive, sparsity-driven threshold, NABLA reduces computational overhead while preserving generative quality. Our method requires no custom low-level operator design and can be seamlessly integrated with PyTorch's Flex Attention operator. Experiments demonstrate that NABLA achieves up to 2.7x faster training and inference than the full-attention baseline, with almost no degradation in quantitative metrics (CLIP score, VBench score, human evaluation score) or visual quality. The code and model weights are available here: this https URL
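
The abstract describes block-wise attention gated by an adaptive, sparsity-driven threshold and executed through PyTorch's Flex Attention. The sketch below illustrates one way such a mechanism can be wired up: a coarse block-level attention map is computed from pooled queries and keys, key blocks are kept until a cumulative probability mass is reached, and the resulting binary block mask is passed to `flex_attention`. The helper name `nabla_block_mask`, the block size `BLOCK`, the average-pooling step, and the 0.9 threshold are all illustrative assumptions, not the paper's exact algorithm.

```python
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

BLOCK = 64  # hypothetical block size; the paper's choice may differ


def nabla_block_mask(q, k, threshold=0.9):
    """Build a block-level sparsity mask from a pooled attention map.

    q, k: [B, H, S, D] with S divisible by BLOCK. A sketch of adaptive,
    cumulative-mass block selection; not the authors' exact algorithm.
    """
    B, H, S, D = q.shape
    nb = S // BLOCK
    # Coarse block-level attention map from average-pooled queries/keys:
    # costs O((S/BLOCK)^2) instead of O(S^2).
    q_blk = q.view(B, H, nb, BLOCK, D).mean(dim=3)
    k_blk = k.view(B, H, nb, BLOCK, D).mean(dim=3)
    probs = torch.softmax(q_blk @ k_blk.transpose(-1, -2) / D ** 0.5, dim=-1)
    # Per query block, keep the smallest set of key blocks whose
    # cumulative probability mass reaches `threshold`.
    vals, order = probs.sort(dim=-1, descending=True)
    keep_sorted = vals.cumsum(dim=-1) - vals < threshold  # top block always kept
    keep = torch.zeros_like(keep_sorted).scatter_(-1, order, keep_sorted)

    def mask_mod(b, h, q_idx, kv_idx):
        # Map token indices to logical blocks and look up the keep decision.
        return keep[b, h, q_idx // BLOCK, kv_idx // BLOCK]

    return create_block_mask(mask_mod, B, H, S, S, device=q.device)


# Usage: replace a dense attention call with the masked one.
B, H, S, D = 1, 8, 1024, 64
q, k, v = (torch.randn(B, H, S, D, device="cuda") for _ in range(3))
mask = nabla_block_mask(q, k, threshold=0.9)
out = flex_attention(q, k, v, block_mask=mask)
```

Because the selection runs on pooled block-level tensors, its cost is quadratic in the number of blocks rather than the number of tokens, which is what makes the adaptive masking cheap relative to full attention; Flex Attention then skips the masked-out blocks entirely, without any custom low-level kernel.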
