CorrMAE: Pre-training Correspondence Transformers with Masked Autoencoder

9 June 2024
Tangfei Liao, Xiaoqin Zhang, Guobao Xiao, Min Li, Tao Wang, Mang Ye

Papers citing "CorrMAE: Pre-training Correspondence Transformers with Masked Autoencoder"

4 of 4 citing papers shown

VSFormer: Visual-Spatial Fusion Transformer for Correspondence Pruning
Tangfei Liao, Xiaoqin Zhang, Li Zhao, Tao Wang, Guobao Xiao
ViT · 12 · 7 · 0 · 14 Dec 2023

IMP: Iterative Matching and Pose Estimation with Adaptive Pooling
Fei Xue, Ignas Budvytis, R. Cipolla
25 · 13 · 0 · 28 Apr 2023

Traj-MAE: Masked Autoencoders for Trajectory Prediction
Hao Chen, Jiaze Wang, Kun Shao, Furui Liu, Jianye Hao, Chenyong Guan, Guangyong Chen, Pheng-Ann Heng
50 · 37 · 0 · 12 Mar 2023

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM · 258 · 7,337 · 0 · 11 Nov 2021