Cited By: arXiv 2407.03475
How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks
3 July 2024
Etai Littwin, Omid Saremi, Madhu Advani, Vimal Thilak, Preetum Nakkiran, Chen Huang, Joshua Susskind
Links: arXiv (abs) · PDF · HTML · GitHub
Papers citing "How JEPA Avoids Noisy Features: The Implicit Bias of Deep Linear Self Distillation Networks" (8 papers)
Generalized Event Partonomy Inference with Structured Hierarchical Predictive Learning
Zhou Chen, Joe Lin, Sathyanarayanan N. Aakur · 03 Dec 2025 · 106 / 0 / 0
Gaussian Embeddings: How JEPAs Secretly Learn Your Data Density [DRL]
Randall Balestriero, Nicolas Ballas, Mike Rabbat, Yann LeCun · 07 Oct 2025 · 288 / 6 / 0
Rethinking JEPA: Compute-Efficient Video SSL with Frozen Teachers
Xianhang Li, Chen Huang, Chun-Liang Li, Eran Malach, J. Susskind, Vimal Thilak, Etai Littwin · 29 Sep 2025 · 202 / 5 / 0
LLM-JEPA: Large Language Models Meet Joint Embedding Predictive Architectures
Hai Huang, Yann LeCun, Randall Balestriero · 11 Sep 2025 · 237 / 13 / 0
From Linearity to Non-Linearity: How Masked Autoencoders Capture Spatial Correlations
Anthony Bisulco, Rahul Ramesh, Randall Balestriero, Pratik Chaudhari · 21 Aug 2025 · 158 / 1 / 0
Dual Perspectives on Non-Contrastive Self-Supervised Learning [SSL]
Jean Ponce, Basile Terver, M. Hebert, Michael Arbel · 18 Jun 2025 · 220 / 2 / 0
Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives
Dilermando Queiroz, Anderson Carlos, André Anjos, Lilian Berton · 24 Feb 2025 · 367 / 5 / 0
Wearable Accelerometer Foundation Models for Health via Knowledge Distillation
Salar Abbaspourazad, Anshuman Mishra, Joseph D. Futoma, Andrew C. Miller, Ian Shapiro · 15 Dec 2024 · 541 / 9 / 0