On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels

27 August 2019 · arXiv:1908.10292
Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai

Papers citing "On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels"

21 citing papers

Spectral Analysis of the Neural Tangent Kernel for Deep Residual Networks
Yuval Belfer, Amnon Geifman, Meirav Galun, Ronen Basri
Journal of Machine Learning Research (JMLR), 2021 · 07 Apr 2021

Exact Gap between Generalization Error and Uniform Convergence in Random Feature Models
Zitong Yang, Yu Bai, Song Mei
International Conference on Machine Learning (ICML), 2021 · 08 Mar 2021

Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari
Conference on Learning Theory (COLT), 2021 · 25 Feb 2021

Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization
Ke Wang, Christos Thrampoulidis
SIAM Journal on Mathematics of Data Science (SIMODS), 2020 · 18 Nov 2020

Deep Equals Shallow for ReLU Networks in Kernel Regimes
A. Bietti, Francis R. Bach
30 Sep 2020

Benign overfitting in ridge regression
Alexander Tsigler, Peter L. Bartlett
29 Sep 2020

For interpolating kernel machines, minimizing the norm of the ERM solution minimizes stability
Akshay Rangamani, Lorenzo Rosasco, T. Poggio
28 Jun 2020

Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco
17 Jun 2020

Triple descent and the two kinds of overfitting: Where & why do they appear?
Stéphane d'Ascoli, Levent Sagun, Giulio Biroli
05 Jun 2020

Spectra of the Conjugate Kernel and Neural Tangent Kernel for linear-width neural networks
Z. Fan, Zhichao Wang
25 May 2020

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
23 Apr 2020

Optimal Regularization Can Mitigate Double Descent
Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma
International Conference on Learning Representations (ICLR), 2020 · 04 Mar 2020

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur
Social Science Research Network (SSRN), 2020 · 05 Feb 2020

A Deep Conditioning Treatment of Neural Networks
Naman Agarwal, Pranjal Awasthi, Satyen Kale
International Conference on Algorithmic Learning Theory (ALT), 2020 · 04 Feb 2020

More Data Can Hurt for Linear Regression: Sample-wise Double Descent
Preetum Nakkiran
16 Dec 2019

The Generalization Error of the Minimum-norm Solutions for Over-parameterized Neural Networks
Weinan E, Chao Ma, Lei Wu
15 Dec 2019

A Constructive Prediction of the Generalization Error Across Scales
Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, Nir Shavit
International Conference on Learning Representations (ICLR), 2019 · 27 Sep 2019

Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently
Rong Ge, Runzhe Wang, Haoyu Zhao
26 Sep 2019

Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio, Andrzej Banburski, Q. Liao
Proceedings of the National Academy of Sciences of the United States of America (PNAS), 2019 · 25 Aug 2019

Linearized two-layers neural networks in high dimension
Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari
Annals of Statistics (Ann. Stat.), 2019 · 27 Apr 2019

Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun, S. Sra, Ali Jadbabaie
17 Oct 2018