
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Andrea Montanari, Yiqiao Zhong
arXiv:2007.12826, 25 July 2020

Papers citing "The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training"

24 / 74 papers shown
A Framework for Overparameterized Learning
Dávid Terjék, Diego González-Sánchez
26 May 2022
Quadratic models for understanding catapult dynamics of neural networks
Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, M. Belkin
24 May 2022
Transition to Linearity of General Neural Networks with Directed Acyclic Graph Architecture
Libin Zhu, Chaoyue Liu, M. Belkin
24 May 2022
Memorization and Optimization in Deep Neural Networks with Minimum Over-parameterization
Simone Bombari, Mohammad Hossein Amani, Marco Mondelli
20 May 2022
High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang
03 May 2022
Adversarial Examples in Random Neural Networks with General Activations
Andrea Montanari, Yuchen Wu
31 Mar 2022
An Empirical Study of Memorization in NLP
Xiaosen Zheng, Jing Jiang
23 Mar 2022
On the (Non-)Robustness of Two-Layer Neural Networks in Different Learning Regimes
Elvis Dohmatob, A. Bietti
22 Mar 2022
Universality of empirical risk minimization
Andrea Montanari, Basil Saeed
17 Feb 2022
Benign Overfitting in Two-layer Convolutional Neural Networks
Yuan Cao, Zixiang Chen, M. Belkin, Quanquan Gu
14 Feb 2022
Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett
11 Feb 2022
Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
Zhichao Wang, Yizhe Zhu
20 Sep 2021
Deep Networks Provably Classify Data on Curves
Tingran Wang, Sam Buchanan, D. Gilboa, John N. Wright
29 Jul 2021
Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off
Huiyuan Wang, Wei Lin
09 Jun 2021
Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob
04 Jun 2021
A Geometric Analysis of Neural Collapse with Unconstrained Features
Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, Qing Qu
06 May 2021
Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Yuan Cao, Quanquan Gu, M. Belkin
28 Apr 2021
A Recipe for Global Convergence Guarantee in Deep Neural Networks
Kenji Kawaguchi, Qingyun Sun
12 Apr 2021
When Are Solutions Connected in Deep Networks?
Quynh N. Nguyen, Pierre Bréchet, Marco Mondelli
18 Feb 2021
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
Kenji Kawaguchi
15 Feb 2021
Tight Bounds on the Smallest Eigenvalue of the Neural Tangent Kernel for Deep ReLU Networks
Quynh N. Nguyen, Marco Mondelli, Guido Montúfar
21 Dec 2020
Benign overfitting in ridge regression
Alexander Tsigler, Peter L. Bartlett
29 Sep 2020
Deep Networks and the Multiple Manifold Problem
Sam Buchanan, D. Gilboa, John N. Wright
25 Aug 2020
Large-time asymptotics in deep learning
Carlos Esteve, Borjan Geshkovski, Dario Pighin, Enrique Zuazua
06 Aug 2020