Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning
Manuel Goulão, Arlindo L. Oliveira
arXiv:2209.10901 · 22 September 2022 · ViT
Papers citing "Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning" (7 of 7 papers shown):
Uncovering RL Integration in SSL Loss: Objective-Specific Implications for Data-Efficient RL
Ömer Veysel Çağatan, Barış Akgün · OffRL · 22 Oct 2024

Masked Feature Modelling: Feature Masking for the Unsupervised Pre-training of a Graph Attention Network Block for Bottom-up Video Event Recognition
Dimitrios Daskalakis, Nikolaos Gkalelis, Vasileios Mezaris · 24 Aug 2023

Improving Policy Learning via Language Dynamics Distillation
Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel · OffRL · 30 Sep 2022

Masked Autoencoders Are Scalable Vision Learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick · ViT, TPM · 11 Nov 2021

Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin · 29 Apr 2021

Decoupling Representation Learning from Reinforcement Learning
Adam Stooke, Kimin Lee, Pieter Abbeel, Michael Laskin · SSL, DRL · 14 Sep 2020

Improved Baselines with Momentum Contrastive Learning
Xinlei Chen, Haoqi Fan, Ross B. Girshick, Kaiming He · SSL · 09 Mar 2020