When can we Approximate Wide Contrastive Models with Neural Tangent Kernels and Principal Component Analysis?
13 March 2024
Gautham Govind Anil, P. Esser, D. Ghoshdastidar
ArXiv · PDF · HTML

Papers citing "When can we Approximate Wide Contrastive Models with Neural Tangent Kernels and Principal Component Analysis?"

6 / 6 papers shown
1. Infinite Width Limits of Self Supervised Neural Networks
   Maximilian Fleissner, Gautham Govind Anil, D. Ghoshdastidar
   SSL · 17 Nov 2024

2. Towards a Unified Theoretical Understanding of Non-contrastive Learning via Rank Differential Mechanism
   Zhijian Zhuo, Yifei Wang, Jinwen Ma, Yisen Wang
   04 Mar 2023

3. What shapes the loss landscape of self-supervised learning?
   Liu Ziyin, Ekdeep Singh Lubana, Masakuni Ueda, Hidenori Tanaka
   02 Oct 2022

4. Joint Embedding Self-Supervised Learning in the Kernel Regime
   B. Kiani, Randall Balestriero, Yubei Chen, S. Lloyd, Yann LeCun
   SSL · 29 Sep 2022

5. Understanding Deep Contrastive Learning via Coordinate-wise Optimization
   Yuandong Tian
   29 Jan 2022

6. Understanding self-supervised Learning Dynamics without Contrastive Pairs
   Yuandong Tian, Xinlei Chen, Surya Ganguli
   SSL · 12 Feb 2021