SEPT: Towards Scalable and Efficient Visual Pre-Training

11 December 2022
Yiqi Lin, Huabin Zheng, Huaping Zhong, Jinjing Zhu, Weijia Li, Conghui He, Lin Wang
ArXiv · PDF · HTML

Papers citing "SEPT: Towards Scalable and Efficient Visual Pre-Training"

5 / 5 papers shown
Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
ViT, TPM · 305 · 7,434 · 0 · 11 Nov 2021

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin
308 · 5,773 · 0 · 29 Apr 2021

Meta Pseudo Labels
Hieu H. Pham, Zihang Dai, Qizhe Xie, Minh-Thang Luong, Quoc V. Le
VLM · 250 · 656 · 0 · 23 Mar 2020

Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He
SSL · 264 · 3,369 · 0 · 09 Mar 2020

Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning
Weifeng Ge, Yizhou Yu
86 · 233 · 0 · 28 Feb 2017