Recently, vision transformers (ViTs) have achieved excellent performance on vision tasks by computing global self-attention among the image patches. Given n patches, they incur quadratic complexity, i.e., O(n^2), and the time cost becomes high when the input image is split at a fine granularity. Meanwhile, the pivotal information is often gathered in only a few regions of an input image, so some tokens may not be helpful for the downstream tasks. To handle this problem, we introduce an anchor-based efficient vision transformer (AnchorFormer), which employs anchor tokens to learn the pivotal information and accelerate the inference. Firstly, by estimating the bipartite attention between the anchors and the tokens, the complexity is reduced from O(n^2) to O(mn), where m is the number of anchors and m < n. Notably, by representing the anchors with the neurons in a neural layer, we can differentiably learn these distributions and approximate the global self-attention through a Markov process. Moreover, we extend the proposed model to three downstream tasks including classification, detection, and segmentation. Extensive experiments show the effectiveness of our AnchorFormer, e.g., achieving up to 9.0% higher accuracy or a 46.7% FLOPs reduction on ImageNet classification, and 81.3% higher mAP on COCO detection under comparable FLOPs, compared to the current baselines.
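A minimal sketch of the anchor-based bipartite attention idea described above, written in PyTorch. The module name (AnchorAttention), the use of an nn.Parameter for the anchor neurons, and all hyperparameters are illustrative assumptions, not the paper's released implementation; the point is only to show the O(mn) bipartite structure and the two-step composition that approximates full attention.

```python
# Sketch of anchor-based bipartite attention (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorAttention(nn.Module):
    def __init__(self, dim, num_anchors, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        # Anchors represented as learnable weights ("neurons in a neural layer"),
        # so they are trained end-to-end by backpropagation.
        self.anchors = nn.Parameter(torch.randn(num_anchors, dim) * 0.02)
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, N, C = x.shape                               # N tokens, m anchors, m < N
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        a = self.anchors.unsqueeze(0).expand(B, -1, -1)  # (B, m, C)

        def split(t):  # (B, L, C) -> (B, heads, L, head_dim)
            return t.reshape(B, -1, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v, a = map(split, (q, k, v, a))

        # Two bipartite attention maps: tokens -> anchors (B, h, N, m) and
        # anchors -> tokens (B, h, m, N). Each costs O(mN) instead of O(N^2).
        token_to_anchor = F.softmax(q @ a.transpose(-2, -1) * self.scale, dim=-1)
        anchor_to_token = F.softmax(a @ k.transpose(-2, -1) * self.scale, dim=-1)

        # Composing the two row-stochastic maps (a two-step Markov chain)
        # approximates the full N x N attention applied to the values.
        out = token_to_anchor @ (anchor_to_token @ v)   # (B, h, N, head_dim)
        out = out.transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# Usage: 196 patch tokens attended through 16 anchors.
x = torch.randn(2, 196, 256)
attn = AnchorAttention(dim=256, num_anchors=16)
print(attn(x).shape)  # torch.Size([2, 196, 256])
```

Because the value aggregation is factored through the m anchors, the two matrix products scale linearly in the number of tokens for fixed m, which is where the claimed FLOPs reduction comes from.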
@article{shan2025_2505.16463,
  title   = {AnchorFormer: Differentiable Anchor Attention for Efficient Vision Transformer},
  author  = {Jiquan Shan and Junxiao Wang and Lifeng Zhao and Liang Cai and Hongyuan Zhang and Ioannis Liritzis},
  journal = {arXiv preprint arXiv:2505.16463},
  year    = {2025}
}