arXiv:1908.10292 (v2, latest)
On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels
27 August 2019
Tengyuan Liang
Alexander Rakhlin
Xiyu Zhai
Papers citing
"On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels"
21 papers shown
Spectral Analysis of the Neural Tangent Kernel for Deep Residual Networks
Journal of machine learning research (JMLR), 2021
Yuval Belfer
Amnon Geifman
Meirav Galun
Ronen Basri
07 Apr 2021
Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models
International Conference on Machine Learning (ICML), 2021
Zitong Yang
Yu Bai
Song Mei
08 Mar 2021
Learning with invariances in random features and kernel models
Annual Conference Computational Learning Theory (COLT), 2021
Song Mei
Theodor Misiakiewicz
Andrea Montanari
25 Feb 2021
Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization
SIAM Journal on Mathematics of Data Science (SIMODS), 2020
Ke Wang
Christos Thrampoulidis
18 Nov 2020
Deep Equals Shallow for ReLU Networks in Kernel Regimes
A. Bietti
Francis R. Bach
30 Sep 2020
Benign overfitting in ridge regression
Alexander Tsigler
Peter L. Bartlett
29 Sep 2020
For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability
Akshay Rangamani
Lorenzo Rosasco
T. Poggio
28 Jun 2020
Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana
Alessandro Rudi
Ernesto De Vito
Lorenzo Rosasco
17 Jun 2020
Triple descent and the two kinds of overfitting: Where & why do they appear?
Stéphane d'Ascoli
Levent Sagun
Giulio Biroli
05 Jun 2020
Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Z. Fan
Zhichao Wang
25 May 2020
Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu
Xiaolin Huang
Yudong Chen
Johan A. K. Suykens
23 Apr 2020
Optimal Regularization Can Mitigate Double Descent
International Conference on Learning Representations (ICLR), 2020
Preetum Nakkiran
Prayaag Venkat
Sham Kakade
Tengyu Ma
04 Mar 2020
A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-ℓ₁-Norm Interpolated Classifiers
Social Science Research Network (SSRN), 2020
Tengyuan Liang
Pragya Sur
05 Feb 2020
A Deep Conditioning Treatment of Neural Networks
International Conference on Algorithmic Learning Theory (ALT), 2020
Naman Agarwal
Pranjal Awasthi
Satyen Kale
04 Feb 2020
More Data Can Hurt for Linear Regression: Sample-wise Double Descent
Preetum Nakkiran
16 Dec 2019
The Generalization Error of the Minimum-norm Solutions for Over-parameterized Neural Networks
Weinan E
Chao Ma
Lei Wu
15 Dec 2019
A Constructive Prediction of the Generalization Error Across Scales
International Conference on Learning Representations (ICLR), 2019
Jonathan S. Rosenfeld
Amir Rosenfeld
Yonatan Belinkov
Nir Shavit
27 Sep 2019
Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently
Rong Ge
Runzhe Wang
Haoyu Zhao
26 Sep 2019
Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2019
T. Poggio
Andrzej Banburski
Q. Liao
25 Aug 2019
Linearized two-layers neural networks in high dimension
Annals of Statistics (Ann. Stat.), 2019
Behrooz Ghorbani
Song Mei
Theodor Misiakiewicz
Andrea Montanari
27 Apr 2019
Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun
S. Sra
Ali Jadbabaie
17 Oct 2018