
Plug-and-play linear attention with provable guarantees for training-free image restoration

Abstract

Multi-head self-attention (MHSA) is a key building block in modern vision Transformers, yet its quadratic complexity in the number of tokens remains a major bottleneck for real-time and resource-constrained deployment. We present PnP-Nystra, a training-free Nyström-based linear attention module designed as a plug-and-play replacement for MHSA in pretrained image restoration Transformers, with provable kernel approximation error guarantees. PnP-Nystra integrates directly into window-based architectures such as SwinIR, Uformer, and Dehazeformer, yielding efficient inference without finetuning. Across image denoising, deblurring, dehazing, and super-resolution, PnP-Nystra delivers 1.8–3.6× speedups on an NVIDIA RTX 4090 GPU and 1.8–7× speedups on CPU inference. Compared with the strongest training-free linear-attention baselines we evaluate, our method incurs the smallest quality drop and stays closest to the original model's outputs.
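To make the core idea concrete, the sketch below shows a generic Nyström approximation of softmax attention: a small set of m landmark tokens replaces the full n×n attention matrix with three thin factors, reducing cost from O(n²) to O(nm). This is a minimal illustration of the general technique only, not the authors' PnP-Nystra implementation; the segment-mean landmark choice and all function names here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m=16):
    """Approximate softmax(Q K^T / sqrt(d)) V with m landmark tokens.

    Exact attention costs O(n^2) in the token count n; the Nystrom
    factorization below costs O(n*m), linear in n for fixed m.
    (Illustrative sketch only -- not the paper's PnP-Nystra module.)
    """
    n, d = Q.shape
    # Landmarks as segment means of queries/keys (one common choice).
    idx = np.array_split(np.arange(n), m)
    Q_l = np.stack([Q[i].mean(0) for i in idx])  # (m, d)
    K_l = np.stack([K[i].mean(0) for i in idx])  # (m, d)
    s = np.sqrt(d)
    F = softmax(Q @ K_l.T / s)   # (n, m): queries vs. landmark keys
    A = softmax(Q_l @ K_l.T / s) # (m, m): landmark-landmark kernel
    B = softmax(Q_l @ K.T / s)   # (m, n): landmark queries vs. keys
    # Nystrom reconstruction: F A^+ B approximates the full attention map.
    return F @ np.linalg.pinv(A) @ (B @ V)  # (n, d)
```

With m equal to the token count, each landmark is a single token and the factorization reproduces exact attention (since A A⁺ A = A); shrinking m trades accuracy for speed.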

Main: 5 pages, 4 figures, 7 tables; bibliography: 1 page