From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks

International Conference on Learning Representations (ICLR), 2025
22 September 2024
Clémentine Dominé, Nicolas Anguita, A. Proca, Lukas Braun, D. Kunin, P. Mediano, Andrew M. Saxe
arXiv: 2409.14623

Papers citing "From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks"

Showing 16 of 66 citing papers.
A mathematical theory of semantic development in deep neural networks
Andrew M. Saxe, James L. McClelland, Surya Ganguli
23 Oct 2018
A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks
Sanjeev Arora, Nadav Cohen, Noah Golowich, Wei Hu
04 Oct 2018
Gradient Descent Provably Optimizes Over-parameterized Neural Networks
S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh
04 Oct 2018
An analytic theory of generalization dynamics and transfer learning in deep linear networks
International Conference on Learning Representations (ICLR), 2019
Andrew Kyle Lampinen, Surya Ganguli
27 Sep 2018
Theory IIIb: Generalization in Deep Networks
T. Poggio, Q. Liao, Alycia Lee, Andrzej Banburski, Xavier Boix, Jack Hidary
29 Jun 2018
Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
20 Jun 2018
Insights on representational similarity in neural networks with canonical correlation
Ari S. Morcos, M. Raghu, Samy Bengio
14 Jun 2018
On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport
Lénaïc Chizat, Francis R. Bach
24 May 2018
Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach
Grant M. Rotskoff, Eric Vanden-Eijnden
02 May 2018
A Mean Field View of the Landscape of Two-Layers Neural Networks
Song Mei, Andrea Montanari, Phan-Minh Nguyen
18 Apr 2018
Continual Lifelong Learning with Neural Networks: A Review
G. I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, S. Wermter
21 Feb 2018
On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
Sanjeev Arora, Nadav Cohen, Elad Hazan
19 Feb 2018
Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice
Jeffrey Pennington, S. Schoenholz, Surya Ganguli
13 Nov 2017
Overcoming catastrophic forgetting in neural networks
J. Kirkpatrick, Razvan Pascanu, Neil C. Rabinowitz, J. Veness, Guillaume Desjardins, ..., A. Grabska-Barwinska, Demis Hassabis, Claudia Clopath, D. Kumaran, R. Hadsell
02 Dec 2016
All you need is a good init
Dmytro Mishkin, Jirí Matas
19 Nov 2015
Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
International Conference on Learning Representations (ICLR), 2014
Andrew M. Saxe, James L. McClelland, Surya Ganguli
20 Dec 2013