ResearchTrend.AI
Two models of double descent for weak features (arXiv:1903.07571)
M. Belkin, Daniel J. Hsu, Ji Xu
18 March 2019

Papers citing "Two models of double descent for weak features"

Showing 50 of 262 citing papers.
Revisiting minimum description length complexity in overparameterized models
Raaz Dwivedi, Chandan Singh, Bin Yu, Martin J. Wainwright (17 Jun 2020)

Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana, Alessandro Rudi, E. De Vito, Lorenzo Rosasco (17 Jun 2020)

Overparameterization and generalization error: weighted trigonometric interpolation
Yuege Xie, H. Chou, Holger Rauhut, Rachel A. Ward (15 Jun 2020)

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
Yehuda Dar, Richard G. Baraniuk (12 Jun 2020)

Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization
Benjamin Aubin, Florent Krzakala, Yue M. Lu, Lenka Zdeborová (11 Jun 2020)

Asymptotics of Ridge(less) Regression under General Source Condition
Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco (11 Jun 2020)

On Uniform Convergence and Low-Norm Interpolation Learning
Lijia Zhou, Danica J. Sutherland, Nathan Srebro (10 Jun 2020)

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression
Denny Wu, Ji Xu (10 Jun 2020)
An Overview of Neural Network Compression
James O'Neill (05 Jun 2020)

Model Repair: Robust Recovery of Over-Parameterized Statistical Models
Chao Gao, John D. Lafferty (20 May 2020)

Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai (16 May 2020)

Generalization Error of Generalized Linear Models in High Dimensions
M. Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher (01 May 2020)

Generalization Error for Linear Regression under Distributed Learning
Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén (30 Apr 2020)

Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
Niladri S. Chatterji, Philip M. Long (25 Apr 2020)

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens (23 Apr 2020)

Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Tengyuan Liang, Hai Tran-Bach (09 Apr 2020)

On the robustness of the minimum $\ell_2$ interpolator
Geoffrey Chinot, M. Lerasle (12 Mar 2020)
Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited
Wesley J. Maddox, Gregory W. Benton, A. Wilson (04 Mar 2020)

Optimal Regularization Can Mitigate Double Descent
Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma (04 Mar 2020)

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (02 Mar 2020)

The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization
Yifei Min, Lin Chen, Amin Karbasi (25 Feb 2020)

Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors
Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk (25 Feb 2020)

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method
Michal Derezinski, Rajiv Khanna, Michael W. Mahoney (21 Feb 2020)

Implicit Regularization of Random Feature Models
Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel (19 Feb 2020)
Self-explaining AI as an alternative to interpretable AI
Daniel C. Elton (12 Feb 2020)

Sparse Recovery With Non-Linear Fourier Features
Ayça Özçelikkale (12 Feb 2020)

Characterizing Structural Regularities of Labeled Data in Overparameterized Models
Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer (08 Feb 2020)

Interpolating Predictors in High-Dimensional Factor Regression
F. Bunea, Seth Strimas-Mackey, M. Wegkamp (06 Feb 2020)

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Tengyuan Liang, Pragya Sur (05 Feb 2020)

A Deep Conditioning Treatment of Neural Networks
Naman Agarwal, Pranjal Awasthi, Satyen Kale (04 Feb 2020)

Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree
Peizhong Ju, Xiaojun Lin, Jia Liu (02 Feb 2020)

Analytic Study of Double Descent in Binary Classification: The Impact of Loss
Ganesh Ramachandra Kini, Christos Thrampoulidis (30 Jan 2020)
Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective
S. Amari (20 Jan 2020)

Risk of the Least Squares Minimum Norm Estimator under the Spike Covariance Model
Yasaman Mahdaviyeh, Zacharie Naulet (31 Dec 2019)

On the Bias-Variance Tradeoff: Textbooks Need an Update
Brady Neal (17 Dec 2019)

More Data Can Hurt for Linear Regression: Sample-wise Double Descent
Preetum Nakkiran (16 Dec 2019)

Double descent in the condition number
T. Poggio, Gil Kur, Andy Banburski (12 Dec 2019)

Exact expressions for double descent and implicit regularization via surrogate random design
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney (10 Dec 2019)

Deep Double Descent: Where Bigger Models and More Data Hurt
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever (04 Dec 2019)

How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu (27 Nov 2019)

Implicit Regularization and Convergence for Weight Normalization
Xiaoxia Wu, Edgar Dobriban, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel A. Ward, Qiang Liu (18 Nov 2019)
A Model of Double Descent for High-dimensional Binary Linear Classification
Zeyu Deng, A. Kammoun, Christos Thrampoulidis (13 Nov 2019)

A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case
Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro (03 Oct 2019)

Finite Depth and Width Corrections to the Neural Tangent Kernel
Boris Hanin, Mihai Nica (13 Sep 2019)

Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio, Andrzej Banburski, Q. Liao (25 Aug 2019)

The generalization error of random features regression: Precise asymptotics and double descent curve
Song Mei, Andrea Montanari (14 Aug 2019)

Benign Overfitting in Linear Regression
Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler (26 Jun 2019)

Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian
Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi (12 Jun 2019)

Does Learning Require Memorization? A Short Tale about a Long Tail
Vitaly Feldman (12 Jun 2019)

On the number of variables to use in principal component regression
Ji Xu, Daniel J. Hsu (04 Jun 2019)