ResearchTrend.AI
Better plain ViT baselines for ImageNet-1k
3 May 2022
Lucas Beyer, Xiaohua Zhai, Alexander Kolesnikov
Tags: ViT, VLM

Papers citing "Better plain ViT baselines for ImageNet-1k"

18 / 68 papers shown
1. RECLIP: Resource-efficient CLIP by Training with Small Images
   Runze Li, Dahun Kim, B. Bhanu, Weicheng Kuo · VLM, CLIP · 12 Apr 2023

2. A Study of Autoregressive Decoders for Multi-Tasking in Computer Vision
   Lucas Beyer, Bo Wan, Gagan Madan, Filip Pavetić, Andreas Steiner, ..., Emanuele Bugliarello, Xiao Wang, Qihang Yu, Liang-Chieh Chen, Xiaohua Zhai · 30 Mar 2023

3. Sigmoid Loss for Language Image Pre-Training
   Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer · CLIP, VLM · 27 Mar 2023

4. Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck
   Jongheon Jeong, Sihyun Yu, Hankook Lee, Jinwoo Shin · AAML · 24 Mar 2023

5. Symbolic Synthesis of Neural Networks
   Eli Whitehouse · 06 Mar 2023

6. Dual PatchNorm
   Manoj Kumar, Mostafa Dehghani, N. Houlsby · UQCV, ViT · 02 Feb 2023

7. Adaptive Computation with Elastic Input Sequence
   Fuzhao Xue, Valerii Likhosherstov, Anurag Arnab, N. Houlsby, Mostafa Dehghani, Yang You · 30 Jan 2023

8. Joint Training of Deep Ensembles Fails Due to Learner Collusion
   Alan Jeffares, Tennison Liu, Jonathan Crabbé, M. Schaar · FedML · 26 Jan 2023

9. FlexiViT: One Model for All Patch Sizes
   Lucas Beyer, Pavel Izmailov, Alexander Kolesnikov, Mathilde Caron, Simon Kornblith, Xiaohua Zhai, Matthias Minderer, Michael Tschannen, Ibrahim M. Alabdulmohsin, Filip Pavetić · VLM · 15 Dec 2022

10. Spikeformer: A Novel Architecture for Training High-Performance Low-Latency Spiking Neural Network
    Yudong Li, Yunlin Lei, Xu Yang · 19 Nov 2022

11. VeLO: Training Versatile Learned Optimizers by Scaling Up
    Luke Metz, James Harrison, C. Freeman, Amil Merchant, Lucas Beyer, ..., Naman Agrawal, Ben Poole, Igor Mordatch, Adam Roberts, Jascha Narain Sohl-Dickstein · 17 Nov 2022

12. Exploring Long-Sequence Masked Autoencoders
    Ronghang Hu, Shoubhik Debnath, Saining Xie, Xinlei Chen · 13 Oct 2022

13. A new hope for network model generalization
    Alexander Dietmüller, Siddhant Ray, Romain Jacob, Laurent Vanbever · AI4CE · 12 Jul 2022

14. No Reason for No Supervision: Improved Generalization in Supervised Models
    Mert Bulent Sariyildiz, Yannis Kalantidis, Alahari Karteek, Diane Larlus · SSL, OOD, LRM · 30 Jun 2022

15. Learning to Estimate Shapley Values with Vision Transformers
    Ian Covert, Chanwoo Kim, Su-In Lee · FAtt · 10 Jun 2022

16. Masked Autoencoders Are Scalable Vision Learners
    Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick · ViT, TPM · 11 Nov 2021

17. ResNet strikes back: An improved training procedure in timm
    Ross Wightman, Hugo Touvron, Hervé Jégou · AI4TS · 01 Oct 2021

18. MLP-Mixer: An all-MLP Architecture for Vision
    Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy · 04 May 2021