ConvNets vs. Transformers: Whose Visual Representations are More Transferable?
arXiv:2108.05305 · 11 August 2021
Authors: Hong-Yu Zhou, Chi-Ken Lu, Sibei Yang, Yizhou Yu
Tags: ViT
Papers citing "ConvNets vs. Transformers: Whose Visual Representations are More Transferable?" (16 / 16 papers shown)
A multi-scale vision transformer-based multimodal GeoAI model for mapping Arctic permafrost thaw
Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Zhining Gu, Yili Yang, Brendan M. Rogers, A. Liljedahl
23 Apr 2025 · Counts: 57 / 0 / 0
CrisisViT: A Robust Vision Transformer for Crisis Image Classification
Zijun Long, R. McCreadie, Muhammad Imran
05 Jan 2024 · Counts: 72 / 9 / 0
Feature Extraction for Generative Medical Imaging Evaluation: New Evidence Against an Evolving Trend
M. Woodland, Austin Castelo, Mais Al Taie, Jessica Albuquerque Marques Silva, Mohamed Eltaher, ..., Suprateek Kundu, Joshua P. Yung, Ankit B. Patel, Kristy K. Brock
Tags: MedIm, AAML
22 Nov 2023 · Counts: 30 / 10 / 0
MTLSegFormer: Multi-task Learning with Transformers for Semantic Segmentation in Precision Agriculture
D. Gonçalves, J. M. Junior, Pedro Zamboni, H. Pistori, Jonathan Li, Keiller Nogueira, W. Gonçalves
04 May 2023 · Counts: 35 / 5 / 0
Multimodal Hyperspectral Image Classification via Interconnected Fusion
Lu Huo, Jiahao Xia, Leijie Zhang, Haimin Zhang, Min Xu
02 Apr 2023 · Counts: 17 / 2 / 0
Self-attention in Vision Transformers Performs Perceptual Grouping, Not Attention
Paria Mehrani, John K. Tsotsos
02 Mar 2023 · Counts: 25 / 24 / 0
Finding Differences Between Transformers and ConvNets Using Counterfactual Simulation Testing
Nataniel Ruiz, Sarah Adel Bargal, Cihang Xie, Kate Saenko, Stan Sclaroff
Tags: ViT
29 Nov 2022 · Counts: 23 / 5 / 0
How Well Do Vision Transformers (VTs) Transfer To The Non-Natural Image Domain? An Empirical Study Involving Art Classification
Vincent Tonkes, M. Sabatelli
Tags: ViT
09 Aug 2022 · Counts: 25 / 6 / 0
Semi-Supervised Segmentation of Mitochondria from Electron Microscopy Images Using Spatial Continuity
Yunpeng Xiao, Youpeng Zhao, Ge Yang
06 Jun 2022 · Counts: 17 / 3 / 0
UniFormer: Unifying Convolution and Self-attention for Visual Recognition
Kunchang Li, Yali Wang, Junhao Zhang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao
Tags: ViT
24 Jan 2022 · Counts: 142 / 361 / 0
UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning
Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu, Hongsheng Li, Yu Qiao
Tags: ViT
12 Jan 2022 · Counts: 31 / 238 / 0
Exploiting Both Domain-specific and Invariant Knowledge via a Win-win Transformer for Unsupervised Domain Adaptation
Wen-hui Ma, Jinming Zhang, Shuang Li, Chi Harold Liu, Yulin Wang, Wei Li
Tags: ViT
25 Nov 2021 · Counts: 16 / 11 / 0
nnFormer: Interleaved Transformer for Volumetric Segmentation
Hong-Yu Zhou, J. Guo, Yinghao Zhang, Lequan Yu, Liansheng Wang, Yizhou Yu
Tags: ViT, MedIm
07 Sep 2021 · Counts: 24 / 306 / 0
TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation
Jinyu Yang, Jingjing Liu, N. Xu, Junzhou Huang
12 Aug 2021 · Counts: 20 / 125 / 0
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
16 Nov 2016 · Counts: 291 / 10,216 / 0
Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016 · Counts: 264 / 5,326 / 0