ResearchTrend.AI

BEiT: BERT Pre-Training of Image Transformers (arXiv:2106.08254)

15 June 2021
Hangbo Bao
Li Dong
Songhao Piao
Furu Wei
    ViT

Papers citing "BEiT: BERT Pre-Training of Image Transformers"

50 / 1,788 papers shown
Scaling Vision Transformers to Gigapixel Images via Hierarchical Self-Supervised Learning
Richard J. Chen
Chengkuan Chen
Yicong Li
Tiffany Y. Chen
A. Trister
Rahul G. Krishnan
Faisal Mahmood
ViT
MedIm
34
406
0
06 Jun 2022
Siamese Image Modeling for Self-Supervised Vision Representation Learning
Chenxin Tao
Xizhou Zhu
Weijie Su
Gao Huang
Bin Li
Jie Zhou
Yu Qiao
Xiaogang Wang
Jifeng Dai
SSL
37
94
0
02 Jun 2022
Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives
Jun Li
Junyu Chen
Yucheng Tang
Ce Wang
Bennett A. Landman
S. K. Zhou
ViT
OOD
MedIm
21
20
0
02 Jun 2022
VL-BEiT: Generative Vision-Language Pretraining
Hangbo Bao
Wenhui Wang
Li Dong
Furu Wei
VLM
10
45
0
02 Jun 2022
A Survey on Video Action Recognition in Sports: Datasets, Methods and Applications
Fei Wu
Qingzhong Wang
Jian Bian
Haoyi Xiong
Ning Ding
Feixiang Lu
Junqing Cheng
Dejing Dou
AI4TS
24
52
0
02 Jun 2022
MaskOCR: Text Recognition with Masked Encoder-Decoder Pretraining
Pengyuan Lyu
Chengquan Zhang
Shanshan Liu
Meina Qiao
Yangliu Xu
Liang Wu
Kun Yao
Junyu Han
Errui Ding
Jingdong Wang
24
42
0
01 Jun 2022
Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction
Jun Chen
Ming Hu
Boyang Albert Li
Mohamed Elhoseiny
37
36
0
01 Jun 2022
CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping
Junlin Han
L. Petersson
Hongdong Li
Ian Reid
26
9
0
31 May 2022
Surface Analysis with Vision Transformers
Simon Dahan
Logan Z. J. Williams
Abdulah Fawaz
Daniel Rueckert
E. C. Robinson
ViT
MedIm
25
2
0
31 May 2022
Few-Shot Diffusion Models
Giorgio Giannone
Didrik Nielsen
Ole Winther
DiffM
183
49
0
30 May 2022
Exploring Advances in Transformers and CNN for Skin Lesion Diagnosis on Small Datasets
Leandro M. de Lima
R. Krohling
ViT
MedIm
28
10
0
30 May 2022
TubeFormer-DeepLab: Video Mask Transformer
Dahun Kim
Jun Xie
Huiyu Wang
Siyuan Qiao
Qihang Yu
Hong-Seok Kim
Hartwig Adam
In So Kweon
Liang-Chieh Chen
ViT
MedIm
86
40
0
30 May 2022
Self-Supervised Visual Representation Learning with Semantic Grouping
Xin Wen
Bingchen Zhao
Anlin Zheng
X. Zhang
Xiaojuan Qi
SSL
117
71
0
30 May 2022
Self-Supervised Pre-training of Vision Transformers for Dense Prediction Tasks
Jaonary Rabarisoa
Valentin Belissen
Florian Chabot
Q. C. Pham
VLM
ViT
SSL
MDE
15
2
0
30 May 2022
CHALLENGER: Training with Attribution Maps
Christian Tomani
Daniel Cremers
8
1
0
30 May 2022
GMML is All you Need
Sara Atito
Muhammad Awais
J. Kittler
ViT
VLM
44
18
0
30 May 2022
HiViT: Hierarchical Vision Transformer Meets Masked Image Modeling
Xiaosong Zhang
Yunjie Tian
Wei Huang
Qixiang Ye
Qi Dai
Lingxi Xie
Qi Tian
58
26
0
30 May 2022
Temporal Latent Bottleneck: Synthesis of Fast and Slow Processing Mechanisms in Sequence Learning
Aniket Didolkar
Kshitij Gupta
Anirudh Goyal
Nitesh B. Gundavarapu
Alex Lamb
Nan Rosemary Ke
Yoshua Bengio
AI4CE
112
17
0
30 May 2022
SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners
Feng Liang
Yangguang Li
Diana Marculescu
SSL
TPM
ViT
51
22
0
28 May 2022
MDMLP: Image Classification from Scratch on Small Datasets with MLP
Tianxu Lv
Chongyang Bai
Chaojie Wang
22
5
0
28 May 2022
A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang
Jin Gao
Zeming Li
Jian-jun Sun
Weiming Hu
ViT
67
41
0
28 May 2022
Point-M2AE: Multi-scale Masked Autoencoders for Hierarchical Point Cloud Pre-training
Renrui Zhang
Ziyu Guo
Rongyao Fang
Bingyan Zhao
Dong Wang
Yu Qiao
Hongsheng Li
Peng Gao
3DPC
178
244
0
28 May 2022
Object-wise Masked Autoencoders for Fast Pre-training
Jiantao Wu
Shentong Mo
ViT
OCL
17
15
0
28 May 2022
Is Lip Region-of-Interest Sufficient for Lipreading?
Jing-Xuan Zhang
Genshun Wan
Jia-Yu Pan
24
6
0
28 May 2022
Multimodal Masked Autoencoders Learn Transferable Representations
Xinyang Geng
Hao Liu
Lisa Lee
Dale Schuurmans
Sergey Levine
Pieter Abbeel
24
113
0
27 May 2022
Contrastive Learning Rivals Masked Image Modeling in Fine-tuning via Feature Distillation
Yixuan Wei
Han Hu
Zhenda Xie
Zheng-Wei Zhang
Yue Cao
Jianmin Bao
Dong Chen
B. Guo
CLIP
86
124
0
27 May 2022
Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN
Siyuan Li
Di Wu
Fang Wu
Lei Shang
Stan Z. Li
32
48
0
27 May 2022
AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition
Shoufa Chen
Chongjian Ge
Zhan Tong
Jiangliu Wang
Yibing Song
Jue Wang
Ping Luo
146
637
0
26 May 2022
Green Hierarchical Vision Transformer for Masked Image Modeling
Lang Huang
Shan You
Mingkai Zheng
Fei Wang
Chao Qian
T. Yamasaki
27
68
0
26 May 2022
Benchmarking of Deep Learning models on 2D Laminar Flow behind Cylinder
Mritunjay Musale
Vaibhav Vasani
AI4CE
21
0
0
26 May 2022
HIRL: A General Framework for Hierarchical Image Representation Learning
Minghao Xu
Yuanfan Guo
Xuanyu Zhu
Jiawen Li
Zhenbang Sun
Jiangtao Tang
Yi Xu
Bingbing Ni
SSL
8
3
0
26 May 2022
MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers
Jihao Liu
Xin Huang
Jinliang Zheng
Yu Liu
Hongsheng Li
33
53
0
26 May 2022
Pretraining is All You Need for Image-to-Image Translation
Tengfei Wang
Ting Zhang
Bo Zhang
Hao Ouyang
Dong Chen
Qifeng Chen
Fang Wen
DiffM
189
178
0
25 May 2022
Masked Jigsaw Puzzle: A Versatile Position Embedding for Vision Transformers
Bin Ren
Yahui Liu
Yue Song
Wei Bi
Rita Cucchiara
N. Sebe
Wei Wang
51
21
0
25 May 2022
RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
Shitao Xiao
Zheng Liu
Yingxia Shao
Zhao Cao
RALM
118
109
0
24 May 2022
VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification
Souhail Bakkali
Zuheng Ming
Mickael Coustaty
Marçal Rusiñol
O. R. Terrades
VLM
44
30
0
24 May 2022
Contrastive and Non-Contrastive Self-Supervised Learning Recover Global and Local Spectral Embedding Methods
Randall Balestriero
Yann LeCun
SSL
18
129
0
23 May 2022
Decoder Denoising Pretraining for Semantic Segmentation
Emmanuel B. Asiedu
Simon Kornblith
Ting Chen
Niki Parmar
Matthias Minderer
Mohammad Norouzi
AI4CE
193
26
0
23 May 2022
GraphMAE: Self-Supervised Masked Graph Autoencoders
Zhenyu Hou
Xiao Liu
Yukuo Cen
Yuxiao Dong
Hongxia Yang
C. Wang
Jie Tang
SSL
45
545
0
22 May 2022
aSTDP: A More Biologically Plausible Learning
Shiyuan Li
9
1
0
22 May 2022
Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy
Zhiqi Bu
J. Mao
Shiyun Xu
131
47
0
21 May 2022
A Study on Transformer Configuration and Training Objective
Fuzhao Xue
Jianghai Chen
Aixin Sun
Xiaozhe Ren
Zangwei Zheng
Xiaoxin He
Yongming Chen
Xin Jiang
Yang You
33
7
0
21 May 2022
Masterful: A Training Platform for Computer Vision Models
S. Wookey
Yaoshiang Ho
Thomas D. Rikert
Juan David Gil Lopez
Juan Manuel Munoz Beancur
...
Ray Tawil
Aaron Sabin
Jack Lynch
Travis Harper
Nikhil Gajendrakumar
VLM
18
1
0
21 May 2022
Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT)
Jue Jiang
N. Tyagi
K. Tringale
C. Crane
H. Veeraraghavan
MedIm
36
34
0
20 May 2022
Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality
Xiang Li
Wenhai Wang
Lingfeng Yang
Jian Yang
107
73
0
20 May 2022
Masked Image Modeling with Denoising Contrast
Kun Yi
Yixiao Ge
Xiaotong Li
Shusheng Yang
Dian Li
Jianping Wu
Ying Shan
Xiaohu Qie
VLM
30
51
0
19 May 2022
Integrally Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection
Feng Liu
Xiaosong Zhang
Zhiliang Peng
Zonghao Guo
Fang Wan
Xian-Wei Ji
Qixiang Ye
ObjD
43
20
0
19 May 2022
Continual Pre-Training Mitigates Forgetting in Language and Vision
Andrea Cossu
Tinne Tuytelaars
Antonio Carta
Lucia C. Passaro
Vincenzo Lomonaco
D. Bacciu
KELM
VLM
CLL
14
67
0
19 May 2022
TransTab: Learning Transferable Tabular Transformers Across Tables
Zifeng Wang
Jimeng Sun
LMTD
28
135
0
19 May 2022
Training Vision-Language Transformers from Captions
Liangke Gui
Yingshan Chang
Qiuyuan Huang
Subhojit Som
Alexander G. Hauptmann
Jianfeng Gao
Yonatan Bisk
VLM
ViT
172
11
0
19 May 2022