2206.03688
Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials
Eshaan Nichani, Yunzhi Bai, Jason D. Lee
8 June 2022
Papers citing "Identifying good directions to escape the NTK regime and efficiently learn low-degree plus sparse polynomials" (8 papers)
Grokking as the Transition from Lazy to Rich Training Dynamics
Tanishq Kumar, Blake Bordelon, Samuel Gershman, C. Pehlevan
09 Oct 2023

SGD Finds then Tunes Features in Two-Layer Neural Networks with near-Optimal Sample Complexity: A Case Study in the XOR problem
Margalit Glasgow
26 Sep 2023

What can a Single Attention Layer Learn? A Study Through the Random Features Lens
Hengyu Fu, Tianyu Guo, Yu Bai, Song Mei
21 Jul 2023

Tight conditions for when the NTK approximation is valid
Enric Boix-Adserà, Etai Littwin
22 May 2023

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks
Eshaan Nichani, Alexandru Damian, Jason D. Lee
11 May 2023

SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
Emmanuel Abbe, Enric Boix-Adserà, Theodor Misiakiewicz
21 Feb 2023

Learning Single-Index Models with Shallow Neural Networks
A. Bietti, Joan Bruna, Clayton Sanford, M. Song
27 Oct 2022

Transformers Learn Shortcuts to Automata
Bingbin Liu, Jordan T. Ash, Surbhi Goel, A. Krishnamurthy, Cyril Zhang
19 Oct 2022