ResearchTrend.AI
AnchorFormer: Differentiable Anchor Attention for Efficient Vision Transformer (arXiv:2505.16463)

Published: 22 May 2025
Authors: Jiquan Shan, Junxiao Wang, Lifeng Zhao, Liang Cai, Hongyuan Zhang, Ioannis Liritzis
Tags: ViT

Papers citing "AnchorFormer: Differentiable Anchor Attention for Efficient Vision Transformer"

50 of 58 citing papers shown:
 1. Adv-CPG: A Customized Portrait Generation Framework with Facial Adversarial Attacks
    Junying Wang, Hongyuan Zhang, Yuan Yuan · Tags: AAML, PICV · Metrics: 104 / 2 / 0 · 11 Mar 2025
 2. Enhance Vision-Language Alignment with Noise
    Sida Huang, Hongyuan Zhang, Xuelong Li · Tags: VLM · Metrics: 114 / 3 / 0 · 14 Dec 2024
 3. Data Augmentation of Contrastive Learning is Estimating Positive-incentive Noise
    Hongyuan Zhang, Yanchen Xu, Sida Huang, Xuelong Li · Metrics: 30 / 6 / 0 · 19 Aug 2024
 4. CNN2GNN: How to Bridge CNN with GNN
    Ziheng Jiao, Hongyuan Zhang, Xuelong Li · Metrics: 28 / 2 / 0 · 23 Apr 2024
 5. RMT: Retentive Networks Meet Vision Transformers
    Qihang Fan, Huaibo Huang, Mingrui Chen, Hongmin Liu, Ran He · Tags: ViT · Metrics: 70 / 78 / 0 · 20 Sep 2023
 6. Variational Positive-incentive Noise: How Noise Benefits Models
    Hongyuan Zhang, Si-Ying Huang, Yubin Guo, Xuelong Li · Metrics: 45 / 9 / 0 · 13 Jun 2023
 7. Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One
    Hongyuan Zhang, Yanan Zhu, Xuelong Li · Metrics: 50 / 11 / 0 · 20 Apr 2023
 8. Slide-Transformer: Hierarchical Vision Transformer with Local Self-Attention
    Xuran Pan, Tianzhu Ye, Zhuofan Xia, S. Song, Gao Huang · Tags: ViT · Metrics: 51 / 56 / 0 · 09 Apr 2023
 9. Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers
    Cong Wei, Brendan Duke, R. Jiang, P. Aarabi, Graham W. Taylor, Florian Shkurti · Tags: ViT · Metrics: 53 / 15 / 0 · 24 Mar 2023
10. BiFormer: Vision Transformer with Bi-Level Routing Attention
    Lei Zhu, Xinjiang Wang, Zhanghan Ke, Wayne Zhang, Rynson W. H. Lau · Metrics: 138 / 502 / 0 · 15 Mar 2023
11. Positive-incentive Noise
    Xuelong Li · Metrics: 26 / 33 / 0 · 19 Dec 2022
12. Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention at Vision Transformer Inference
    Haoran You, Yunyang Xiong, Xiaoliang Dai, Bichen Wu, Peizhao Zhang, Haoqi Fan, Peter Vajda, Yingyan Lin · Metrics: 49 / 33 / 0 · 18 Nov 2022
13. Grafting Vision Transformers
    Jong Sung Park, Kumara Kahatapitiya, Donghyun Kim, Shivchander Sudalairaj, Quanfu Fan, Michael S. Ryoo · Tags: ViT · Metrics: 54 / 3 / 0 · 28 Oct 2022
14. Contrastive Language-Image Pre-Training with Knowledge Graphs
    Xuran Pan, Tianzhu Ye, Dongchen Han, S. Song, Gao Huang · Tags: VLM, CLIP · Metrics: 51 / 47 / 0 · 17 Oct 2022
15. Vision Transformer with Deformable Attention
    Zhuofan Xia, Xuran Pan, S. Song, Li Erran Li, Gao Huang · Tags: ViT · Metrics: 55 / 464 / 0 · 03 Jan 2022
16. MViTv2: Improved Multiscale Vision Transformers for Classification and Detection
    Yanghao Li, Chaoxia Wu, Haoqi Fan, K. Mangalam, Bo Xiong, Jitendra Malik, Christoph Feichtenhofer · Tags: ViT · Metrics: 115 / 683 / 0 · 02 Dec 2021
17. FBNetV5: Neural Architecture Search for Multiple Tasks in One Run
    Bichen Wu, Chaojian Li, Hang Zhang, Xiaoliang Dai, Peizhao Zhang, Matthew Yu, Jialiang Wang, Yingyan Lin, Peter Vajda · Tags: ViT · Metrics: 76 / 24 / 0 · 19 Nov 2021
18. PP-PicoDet: A Better Real-Time Object Detector on Mobile Devices
    Guanghua Yu, Qinyao Chang, Wenyu Lv, Chang Xu, Cheng Cui, ..., Baohua Lai, Qiwen Liu, Xiaoguang Hu, Dianhai Yu, Yanjun Ma · Tags: ObjD · Metrics: 62 / 121 / 0 · 01 Nov 2021
19. PP-LCNet: A Lightweight CPU Convolutional Neural Network
    Cheng Cui, Tingquan Gao, Shengyun Wei, Yuning Du, Ruoyu Guo, ..., X. Lv, Qiwen Liu, Xiaoguang Hu, Dianhai Yu, Yanjun Ma · Tags: ObjD · Metrics: 62 / 125 / 0 · 17 Sep 2021
20. YOLOX: Exceeding YOLO Series in 2021
    Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun · Tags: ObjD · Metrics: 99 / 4,025 / 0 · 18 Jul 2021
21. Combiner: Full Attention Transformer with Sparse Computation Cost
    Hongyu Ren, H. Dai, Zihang Dai, Mengjiao Yang, J. Leskovec, Dale Schuurmans, Bo Dai · Metrics: 94 / 79 / 0 · 12 Jul 2021
22. CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows
    Xiaoyi Dong, Jianmin Bao, Dongdong Chen, Weiming Zhang, Nenghai Yu, Lu Yuan, Dong Chen, B. Guo · Tags: ViT · Metrics: 119 / 969 / 0 · 01 Jul 2021
23. AutoFormer: Searching Transformers for Visual Recognition
    Minghao Chen, Houwen Peng, Jianlong Fu, Haibin Ling · Tags: ViT · Metrics: 76 / 262 / 0 · 01 Jul 2021
24. Focal Self-attention for Local-Global Interactions in Vision Transformers
    Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, Jianfeng Gao · Tags: ViT · Metrics: 58 / 431 / 0 · 01 Jul 2021
25. Post-Training Quantization for Vision Transformer
    Zhenhua Liu, Yunhe Wang, Kai Han, Siwei Ma, Wen Gao · Tags: ViT, MQ · Metrics: 83 / 332 / 0 · 27 Jun 2021
26. XCiT: Cross-Covariance Image Transformers
    Alaaeldin El-Nouby, Hugo Touvron, Mathilde Caron, Piotr Bojanowski, Matthijs Douze, ..., Ivan Laptev, Natalia Neverova, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou · Tags: ViT · Metrics: 108 / 507 / 0 · 17 Jun 2021
27. Segmenter: Transformer for Semantic Segmentation
    Robin Strudel, Ricardo Garcia Pinel, Ivan Laptev, Cordelia Schmid · Tags: ViT · Metrics: 120 / 1,442 / 0 · 12 May 2021
28. LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
    Ben Graham, Alaaeldin El-Nouby, Hugo Touvron, Pierre Stock, Armand Joulin, Hervé Jégou, Matthijs Douze · Tags: ViT · Metrics: 41 / 779 / 0 · 02 Apr 2021
29. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
    Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo · Tags: ViT · Metrics: 288 / 21,051 / 0 · 25 Mar 2021
30. DeepViT: Towards Deeper Vision Transformer
    Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, Jiashi Feng · Tags: ViT · Metrics: 60 / 517 / 0 · 22 Mar 2021
31. Transformer in Transformer
    Kai Han, An Xiao, Enhua Wu, Jianyuan Guo, Chunjing Xu, Yunhe Wang · Tags: ViT · Metrics: 359 / 1,544 / 0 · 27 Feb 2021
32. Learning Transferable Visual Models From Natural Language Supervision
    Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever · Tags: CLIP, VLM · Metrics: 666 / 28,659 / 0 · 26 Feb 2021
33. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
    Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao · Tags: ViT · Metrics: 438 / 3,660 / 0 · 24 Feb 2021
34. Training data-efficient image transformers & distillation through attention
    Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou · Tags: ViT · Metrics: 273 / 6,657 / 0 · 23 Dec 2020
35. A Survey on Visual Transformer
    Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, ..., Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao · Tags: ViT · Metrics: 102 / 2,174 / 0 · 23 Dec 2020
36. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
    Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby · Tags: ViT · Metrics: 312 / 40,217 / 0 · 22 Oct 2020
37. End to End Binarized Neural Networks for Text Classification
    Harshil Jain, Akshat Agarwal, Kumar Shridhar, Denis Kleyko · Tags: MQ · Metrics: 40 / 27 / 0 · 11 Oct 2020
38. Big Bird: Transformers for Longer Sequences
    Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, ..., Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed · Tags: VLM · Metrics: 478 / 2,051 / 0 · 28 Jul 2020
39. End-to-End Object Detection with Transformers
    Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko · Tags: ViT, 3DV, PINN · Metrics: 275 / 12,847 / 0 · 26 May 2020
40. MobileDets: Searching for Object Detection Architectures for Mobile Accelerators
    Yunyang Xiong, Hanxiao Liu, Suyog Gupta, Berkin Akin, Gabriel Bender, Yongzhe Wang, Pieter-Jan Kindermans, Mingxing Tan, Vikas Singh, Bo Chen · Tags: ObjD · Metrics: 34 / 133 / 0 · 30 Apr 2020
41. XtremeDistil: Multi-stage Distillation for Massive Multilingual Models
    Subhabrata Mukherjee, Ahmed Hassan Awadallah · Metrics: 44 / 57 / 0 · 12 Apr 2020
42. DynaBERT: Dynamic BERT with Adaptive Width and Depth
    Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, Qun Liu · Tags: MQ · Metrics: 56 / 322 / 0 · 08 Apr 2020
43. Designing Network Design Spaces
    Ilija Radosavovic, Raj Prateek Kosaraju, Ross B. Girshick, Kaiming He, Piotr Dollár · Tags: GNN · Metrics: 82 / 1,672 / 0 · 30 Mar 2020
44. EfficientDet: Scalable and Efficient Object Detection
    Mingxing Tan, Ruoming Pang, Quoc V. Le · Metrics: 67 / 4,996 / 0 · 20 Nov 2019
45. NAT: Neural Architecture Transformer for Accurate and Compact Architectures
    Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Jian Chen, P. Zhao, Junzhou Huang · Metrics: 88 / 84 / 0 · 31 Oct 2019
46. Fully Quantized Transformer for Machine Translation
    Gabriele Prato, Ella Charlaix, Mehdi Rezagholizadeh · Tags: MQ · Metrics: 20 / 70 / 0 · 17 Oct 2019
47. Structured Pruning of Large Language Models
    Ziheng Wang, Jeremy Wohlwend, Tao Lei · Metrics: 34 / 283 / 0 · 10 Oct 2019
48. Reducing Transformer Depth on Demand with Structured Dropout
    Angela Fan, Edouard Grave, Armand Joulin · Metrics: 88 / 586 / 0 · 25 Sep 2019
49. Efficient 8-Bit Quantization of Transformer Neural Machine Language Translation Model
    Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek V. Menon, Sun Choi, Kushal Datta, V. Saletore · Tags: MQ · Metrics: 47 / 132 / 0 · 03 Jun 2019
50. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
    Mingxing Tan, Quoc V. Le · Tags: 3DV, MedIm · Metrics: 81 / 17,950 / 0 · 28 May 2019