The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
Ben Adlam, Jeffrey Pennington (15 August 2020)
arXiv:2008.06786

Papers citing "The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization" (28 papers shown)

auto-fpt: Automating Free Probability Theory Calculations for Machine Learning Theory
Arjun Subramonian, Elvis Dohmatob (14 Apr 2025)

Gradient Descent Robustly Learns the Intrinsic Dimension of Data in Training Convolutional Neural Networks [OOD, MLT]
Chenyang Zhang, Peifeng Gao, Difan Zou, Yuan Cao (11 Apr 2025)

Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer [AI4CE]
Blake Bordelon, C. Pehlevan (04 Feb 2025)

High dimensional analysis reveals conservative sharpening and a stochastic edge of stability
Atish Agarwala, Jeffrey Pennington (30 Apr 2024)

Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis [MLT]
Yufan Li, Subhabrata Sen, Ben Adlam (18 Apr 2024)

A Theory of Non-Linear Feature Learning with One Gradient Step in Two-Layer Neural Networks [MLT]
Behrad Moniri, Donghwan Lee, Hamed Hassani, Edgar Dobriban (11 Oct 2023)

Fundamental Limits of Deep Learning-Based Binary Classifiers Trained with Hinge Loss
T. Getu, Georges Kaddoum, M. Bennis (13 Sep 2023)

How Spurious Features Are Memorized: Precise Analysis for Random and NTK Features [AAML]
Simone Bombari, Marco Mondelli (20 May 2023)

Subsample Ridge Ensembles: Equivalences and Generalized Cross-Validation
Jin-Hong Du, Pratik V. Patil, Arun K. Kuchibhotla (25 Apr 2023)

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks [MLT]
Blake Bordelon, C. Pehlevan (06 Apr 2023)

Beyond the Universal Law of Robustness: Sharper Laws for Random Features and Neural Tangent Kernels [AAML]
Simone Bombari, Shayan Kiyani, Marco Mondelli (03 Feb 2023)

Demystifying Disagreement-on-the-Line in High Dimensions
Donghwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed Hassani (31 Jan 2023)

Gradient flow in the Gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris (13 Dec 2022)

Second-order regression models exhibit progressive sharpening to the edge of stability
Atish Agarwala, Fabian Pedregosa, Jeffrey Pennington (10 Oct 2022)

The BUTTER Zone: An Empirical Study of Training Dynamics in Fully Connected Neural Networks [AI4CE]
Charles Edison Tripp, J. Perr-Sauer, L. Hayne, M. Lunacek, Jamil Gafur (25 Jul 2022)

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel (03 Jun 2022)

Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime
Hong Hu, Yue M. Lu (13 May 2022)

High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation [MLT]
Jimmy Ba, Murat A. Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu, Greg Yang (03 May 2022)

Contrasting random and learned features in deep Bayesian linear regression [BDL, MLT]
Jacob A. Zavatone-Veth, William L. Tong, C. Pehlevan (01 Mar 2022)

A generalization gap estimation for overparameterized models via the Langevin functional variance
Akifumi Okuno, Keisuke Yano (07 Dec 2021)

Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model
A. Bodin, N. Macris (22 Oct 2021)

Dataset Distillation with Infinitely Wide Convolutional Networks [DD]
Timothy Nguyen, Roman Novak, Lechao Xiao, Jaehoon Lee (27 Jul 2021)

Random Neural Networks in the Infinite Width Limit as Gaussian Processes [BDL]
Boris Hanin (04 Jul 2021)

Towards an Understanding of Benign Overfitting in Neural Networks [MLT]
Zhu Li, Zhi-Hua Zhou, A. Gretton (06 Jun 2021)

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob (04 Jun 2021)

Appearance of Random Matrix Theory in Deep Learning
Nicholas P. Baskerville, Diego Granziol, J. Keating (12 Feb 2021)

Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition [UD]
Ben Adlam, Jeffrey Pennington (04 Nov 2020)

Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Z. Fan, Zhichao Wang (25 May 2020)