arXiv: 2406.05773
CorrMAE: Pre-training Correspondence Transformers with Masked Autoencoder
9 June 2024
Tangfei Liao, Xiaoqin Zhang, Guobao Xiao, Min Li, Tao Wang, Mang Ye
Papers citing "CorrMAE: Pre-training Correspondence Transformers with Masked Autoencoder" (4 papers shown)
VSFormer: Visual-Spatial Fusion Transformer for Correspondence Pruning
Tangfei Liao, Xiaoqin Zhang, Li Zhao, Tao Wang, Guobao Xiao
ViT · 12 / 7 / 0 · 14 Dec 2023
IMP: Iterative Matching and Pose Estimation with Adaptive Pooling
Fei Xue, Ignas Budvytis, R. Cipolla
28 / 13 / 0 · 28 Apr 2023
Traj-MAE: Masked Autoencoders for Trajectory Prediction
Hao Chen, Jiaze Wang, Kun Shao, Furui Liu, Jianye Hao, Chenyong Guan, Guangyong Chen, Pheng-Ann Heng
50 / 37 / 0 · 12 Mar 2023
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM · 258 / 7,337 / 0 · 11 Nov 2021