How Two-Layer Neural Networks Learn, One (Giant) Step at a Time

29 May 2023
Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan
MLT · arXiv:2305.18270
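
The paper's title refers to training the first layer of a two-layer network with a single large ("giant") full-batch gradient step before refitting the readout layer. As a rough illustration only, here is a minimal NumPy sketch of that protocol; the toy target, the dimensions, the step size eta = n, and the ridge refit of the second layer are assumptions made for this example, not details taken from the paper.

# Minimal sketch (not the authors' code) of a "one giant step" protocol:
# take a single full-batch gradient step on the first-layer weights with a
# large learning rate eta ~ n, then refit the second layer. All names,
# sizes, and the target function below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 4096, 64, 512                        # samples, input dim, width (assumed)

X = rng.standard_normal((n, d)) / np.sqrt(d)
y = np.tanh(X[:, 0] * X[:, 1])                 # toy low-dimensional target (assumed)

W = rng.standard_normal((p, d)) / np.sqrt(d)   # first-layer weights
a = rng.standard_normal(p) / np.sqrt(p)        # second-layer weights

def forward(W):
    # Two-layer network f(x) = a^T tanh(W x), evaluated on the whole batch.
    return np.tanh(X @ W.T) @ a

# One (giant) gradient step on W for the squared loss L = (1/2n) * sum(err^2).
err = forward(W) - y                           # residuals, shape (n,)
H = X @ W.T                                    # pre-activations, shape (n, p)
S = 1.0 - np.tanh(H) ** 2                      # tanh'(H), shape (n, p)
grad_W = ((err[:, None] * S * a).T @ X) / n    # dL/dW, shape (p, d)
eta = n                                        # "giant" step size, eta ~ n (assumed)
W1 = W - eta * grad_W

# After the giant step, refit the readout by ridge regression on the updated
# features (a common second stage in this line of work; the ridge level is assumed).
Phi = np.tanh(X @ W1.T)
a1 = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(p), Phi.T @ y)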

Papers citing "How Two-Layer Neural Networks Learn, One (Giant) Step at a Time" (11 of 11 shown)

  1. Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
     Blake Bordelon, C. Pehlevan · AI4CE · 04 Feb 2025
  2. Mean-Field Analysis for Learning Subspace-Sparse Polynomials with Gaussian Input
     Ziang Chen, Rong Ge · MLT · 10 Jan 2025
  3. An Analysis Framework for Understanding Deep Neural Networks Based on Network Dynamics
     Yuchen Lin, Yong Zhang, Sihan Feng, Hong Zhao · 05 Jan 2025
  4. How Feature Learning Can Improve Neural Scaling Laws
     Blake Bordelon, Alexander B. Atanasov, C. Pehlevan · 26 Sep 2024
  5. A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks
     Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban · MLT · 11 Oct 2023
  6. SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
     Emmanuel Abbe, Enric Boix-Adserà, Theodor Misiakiewicz · FedML, MLT · 21 Feb 2023
  7. Learning Single-Index Models with Shallow Neural Networks
     A. Bietti, Joan Bruna, Clayton Sanford, M. Song · 27 Oct 2022
  8. The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks
     James B. Simon, Madeline Dickens, Dhruva Karkada, M. DeWeese · 08 Oct 2021
  9. The large learning rate phase of deep learning: the catapult mechanism
     Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari · ODL · 04 Mar 2020
  10. Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
      Blake Bordelon, Abdulkadir Canatar, C. Pehlevan · 07 Feb 2020
  11. Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach
      Grant M. Rotskoff, Eric Vanden-Eijnden · 02 May 2018