How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks
arXiv:2407.03475 · 3 July 2024
Etai Littwin, Omid Saremi, Madhu Advani, Vimal Thilak, Preetum Nakkiran, Chen Huang, Joshua Susskind
Links: ArXiv · PDF · HTML
Papers citing "How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks" (6 of 6 papers shown):

| Title | Authors | Tags | Citations | Date |
|---|---|---|---|---|
| Wearable Accelerometer Foundation Models for Health via Knowledge Distillation | Salar Abbaspourazad, Anshuman Mishra, Joseph D. Futoma, Andrew C. Miller, Ian Shapiro | | 0 | 15 Dec 2024 |
| Revisiting Feature Prediction for Learning Visual Representations from Video | Adrien Bardes, Q. Garrido, Jean Ponce, Xinlei Chen, Michael G. Rabbat, Yann LeCun, Mahmoud Assran, Nicolas Ballas | MDE, VLM | 70 | 15 Feb 2024 |
| WERank: Towards Rank Degradation Prevention for Self-Supervised Learning Using Weight Regularization | Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Ali Ghodsi | | 2 | 14 Feb 2024 |
| Masked Autoencoders Are Scalable Vision Learners | Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick | ViT, TPM | 7,337 | 11 Nov 2021 |
| Emerging Properties in Self-Supervised Vision Transformers | Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin | | 5,723 | 29 Apr 2021 |
| Understanding Self-Supervised Learning Dynamics without Contrastive Pairs | Yuandong Tian, Xinlei Chen, Surya Ganguli | SSL | 278 | 12 Feb 2021 |