Swin Transformer: Hierarchical Vision Transformer using Shifted Windows (arXiv:2103.14030)

25 March 2021
Ze Liu
Yutong Lin
Yue Cao
Han Hu
Yixuan Wei
Zheng Zhang
Stephen Lin
B. Guo
    ViT

Papers citing "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows"

48 / 2,048 papers shown
CBNet: A Composite Backbone Network Architecture for Object Detection
Tingting Liang
Xiao Chu
Yudong Liu
Yongtao Wang
Zhi Tang
Wei Chu
Jingdong Chen
Haibin Ling
ObjD
13
161
0
01 Jul 2021
Simple Training Strategies and Model Scaling for Object Detection
Xianzhi Du
Barret Zoph
Wei-Chih Hung
Tsung-Yi Lin
ObjD
25
40
0
30 Jun 2021
Looking Outside the Window: Wide-Context Transformer for the Semantic Segmentation of High-Resolution Remote Sensing Images
L. Ding
Dong Lin
Shaofu Lin
Jing Zhang
Xiaojie Cui
Yuebin Wang
H. Tang
Lorenzo Bruzzone
ViT
19
96
0
29 Jun 2021
Rethinking Token-Mixing MLP for MLP-based Vision Backbone
Tan Yu
Xu Li
Yunfeng Cai
Mingming Sun
Ping Li
38
26
0
28 Jun 2021
K-Net: Towards Unified Image Segmentation
Wenwei Zhang
Jiangmiao Pang
Kai-xiang Chen
Chen Change Loy
ISeg
13
356
0
28 Jun 2021
Probing Inter-modality: Visual Parsing with Self-Attention for Vision-Language Pre-training
Hongwei Xue
Yupan Huang
Bei Liu
Houwen Peng
Jianlong Fu
Houqiang Li
Jiebo Luo
22
88
0
25 Jun 2021
Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting
Haixu Wu
Jiehui Xu
Jianmin Wang
Mingsheng Long
AI4TS
12
2,068
0
24 Jun 2021
IA-RED$^2$: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan
Rameswar Panda
Yifan Jiang
Zhangyang Wang
Rogerio Feris
A. Oliva
VLM
ViT
23
153
0
23 Jun 2021
Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition
Qibin Hou
Zihang Jiang
Li-xin Yuan
Ming-Ming Cheng
Shuicheng Yan
Jiashi Feng
ViT
MLLM
22
204
0
23 Jun 2021
P2T: Pyramid Pooling Transformer for Scene Understanding
Yu-Huan Wu
Yun-Hai Liu
Xin Zhan
Ming-Ming Cheng
ViT
24
218
0
22 Jun 2021
Tracking Instances as Queries
Shusheng Yang
Yuxin Fang
Xinggang Wang
Yu Li
Ying Shan
Bin Feng
Wenyu Liu
22
10
0
22 Jun 2021
TokenLearner: What Can 8 Learned Tokens Do for Images and Videos?
Michael S. Ryoo
A. Piergiovanni
Anurag Arnab
Mostafa Dehghani
A. Angelova
ViT
19
127
0
21 Jun 2021
MSN: Efficient Online Mask Selection Network for Video Instance Segmentation
Vidit Goel
Jiachen Li
Shubhika Garg
Harsh Maheshwari
Humphrey Shi
17
7
0
19 Jun 2021
How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Andreas Steiner
Alexander Kolesnikov
Xiaohua Zhai
Ross Wightman
Jakob Uszkoreit
Lucas Beyer
ViT
29
610
0
18 Jun 2021
Efficient Self-supervised Vision Transformers for Representation Learning
Chunyuan Li
Jianwei Yang
Pengchuan Zhang
Mei Gao
Bin Xiao
Xiyang Dai
Lu Yuan
Jianfeng Gao
ViT
28
208
0
17 Jun 2021
Pre-Trained Models: Past, Present and Future
Xu Han
Zhengyan Zhang
Ning Ding
Yuxian Gu
Xiao Liu
...
Jie Tang
Ji-Rong Wen
Jinhui Yuan
Wayne Xin Zhao
Jun Zhu
AIFin
MQ
AI4MH
24
807
0
14 Jun 2021
MST: Masked Self-Supervised Transformer for Visual Representation
Zhaowen Li
Zhiyang Chen
Fan Yang
Wei Li
Yousong Zhu
...
Rui Deng
Liwei Wu
Rui Zhao
Ming Tang
Jinqiao Wang
ViT
24
161
0
10 Jun 2021
Do Transformers Really Perform Bad for Graph Representation?
Chengxuan Ying
Tianle Cai
Shengjie Luo
Shuxin Zheng
Guolin Ke
Di He
Yanming Shen
Tie-Yan Liu
GNN
21
431
0
09 Jun 2021
Large-scale Unsupervised Semantic Segmentation
Shangqi Gao
Zhong-Yu Li
Ming-Hsuan Yang
Ming-Ming Cheng
Junwei Han
Philip H. S. Torr
UQCV
25
84
0
06 Jun 2021
Signal Transformer: Complex-valued Attention and Meta-Learning for Signal Recognition
Yihong Dong
Ying Peng
Muqiao Yang
Songtao Lu
Qingjiang Shi
38
9
0
05 Jun 2021
Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model
Jiangning Zhang
Chao Xu
Jian Li
Wenzhou Chen
Yabiao Wang
Ying Tai
Shuo Chen
Chengjie Wang
Feiyue Huang
Yong Liu
25
22
0
31 May 2021
Dual-stream Network for Visual Recognition
Mingyuan Mao
Renrui Zhang
Honghui Zheng
Peng Gao
Teli Ma
Yan Peng
Errui Ding
Baochang Zhang
Shumin Han
ViT
16
63
0
31 May 2021
KVT: k-NN Attention for Boosting Vision Transformers
Pichao Wang
Xue Wang
F. Wang
Ming Lin
Shuning Chang
Hao Li
R. L. Jin
ViT
32
105
0
28 May 2021
Intriguing Properties of Vision Transformers
Muzammal Naseer
Kanchana Ranasinghe
Salman Khan
Munawar Hayat
F. Khan
Ming-Hsuan Yang
ViT
251
618
0
21 May 2021
You Only Learn One Representation: Unified Network for Multiple Tasks
Chien-Yao Wang
I-Hau Yeh
Hongpeng Liao
SSL
FedML
18
473
0
10 May 2021
MOTR: End-to-End Multiple-Object Tracking with Transformer
Fangao Zeng
Bin Dong
Cheng Chen
Tiancai Wang
X. Zhang
Yichen Wei
VOT
15
495
0
07 May 2021
A State-of-the-art Survey of Object Detection Techniques in Microorganism Image Analysis: From Classical Methods to Deep Learning Approaches
Pingli Ma
Chen Li
M. Rahaman
Yudong Yao
Jiawei Zhang
Shuojia Zou
Xin Zhao
M. Grzegorzek
24
60
0
07 May 2021
Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks
Meng-Hao Guo
Zheng-Ning Liu
Tai-Jiang Mu
Shimin Hu
18
467
0
05 May 2021
Attention for Image Registration (AiR): an unsupervised Transformer approach
Zihao W. Wang
H. Delingette
ViT
MedIm
25
7
0
05 May 2021
Instances as Queries
Yuxin Fang
Shusheng Yang
Xinggang Wang
Yu Li
Chen Fang
Ying Shan
Bin Feng
Wenyu Liu
ISeg
28
254
0
05 May 2021
TransHash: Transformer-based Hamming Hashing for Efficient Image Retrieval
Yongbiao Chen
Shenmin Zhang
Fangxin Liu
Zhigang Chang
Mang Ye
Zhengwei Qi (Shanghai Jiao Tong University)
ViT
20
48
0
05 May 2021
MLP-Mixer: An all-MLP Architecture for Vision
Ilya O. Tolstikhin
N. Houlsby
Alexander Kolesnikov
Lucas Beyer
Xiaohua Zhai
...
Andreas Steiner
Daniel Keysers
Jakob Uszkoreit
Mario Lucic
Alexey Dosovitskiy
239
2,592
0
04 May 2021
AGMB-Transformer: Anatomy-Guided Multi-Branch Transformer Network for Automated Evaluation of Root Canal Therapy
Yunxiang Li
G. Zeng
Yifan Zhang
Jun Wang
Qianni Zhang
...
Neng Xia
Ruizi Peng
Kai Tang
Yaqi Wang
Shuai Wang
MedIm
AI4CE
90
28
0
02 May 2021
Vision Transformers with Patch Diversification
Chengyue Gong
Dilin Wang
Meng Li
Vikas Chandra
Qiang Liu
ViT
35
62
0
26 Apr 2021
Visformer: The Vision-friendly Transformer
Zhengsu Chen
Lingxi Xie
Jianwei Niu
Xuefeng Liu
Longhui Wei
Qi Tian
ViT
109
208
0
26 Apr 2021
Multiscale Vision Transformers
Haoqi Fan
Bo Xiong
K. Mangalam
Yanghao Li
Zhicheng Yan
Jitendra Malik
Christoph Feichtenhofer
ViT
19
1,215
0
22 Apr 2021
All Tokens Matter: Token Labeling for Training Better Vision Transformers
Zihang Jiang
Qibin Hou
Li-xin Yuan
Daquan Zhou
Yujun Shi
Xiaojie Jin
Anran Wang
Jiashi Feng
ViT
10
203
0
22 Apr 2021
Bridging Global Context Interactions for High-Fidelity Image Completion
Chuanxia Zheng
Tat-Jen Cham
Jianfei Cai
Dinh Q. Phung
ViT
17
78
0
02 Apr 2021
On the Adversarial Robustness of Vision Transformers
Rulin Shao
Zhouxing Shi
Jinfeng Yi
Pin-Yu Chen
Cho-Jui Hsieh
ViT
18
137
0
29 Mar 2021
Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
Pengchuan Zhang
Xiyang Dai
Jianwei Yang
Bin Xiao
Lu Yuan
Lei Zhang
Jianfeng Gao
ViT
21
328
0
29 Mar 2021
Transformer in Transformer
Kai Han
An Xiao
Enhua Wu
Jianyuan Guo
Chunjing Xu
Yunhe Wang
ViT
282
1,518
0
27 Feb 2021
Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang
Enze Xie
Xiang Li
Deng-Ping Fan
Kaitao Song
Ding Liang
Tong Lu
Ping Luo
Ling Shao
ViT
263
3,604
0
24 Feb 2021
Deep Texture-Aware Features for Camouflaged Object Detection
Jingjing Ren
Xiaowei Hu
Lei Zhu
Xuemiao Xu
Yangyang Xu
Weiming Wang
Zijun Deng
Pheng-Ann Heng
ObjD
91
106
0
05 Feb 2021
Bottleneck Transformers for Visual Recognition
A. Srinivas
Tsung-Yi Lin
Niki Parmar
Jonathon Shlens
Pieter Abbeel
Ashish Vaswani
SLR
270
973
0
27 Jan 2021
Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation
Golnaz Ghiasi
Yin Cui
A. Srinivas
Rui Qian
Tsung-Yi Lin
E. D. Cubuk
Quoc V. Le
Barret Zoph
ISeg
223
962
0
13 Dec 2020
Efficient Transformers: A Survey
Yi Tay
Mostafa Dehghani
Dara Bahri
Donald Metzler
VLM
66
1,097
0
14 Sep 2020
LIP: Local Importance-based Pooling
Ziteng Gao
Limin Wang
Gangshan Wu
FAtt
19
94
0
12 Aug 2019
Semantic Understanding of Scenes through the ADE20K Dataset
Bolei Zhou
Hang Zhao
Xavier Puig
Tete Xiao
Sanja Fidler
Adela Barriuso
Antonio Torralba
SSeg
249
1,821
0
18 Aug 2016