
Two models of double descent for weak features
M. Belkin, Daniel J. Hsu, Ji Xu
SIAM Journal on Mathematics of Data Science (SIMODS), 2019
18 March 2019 (arXiv:1903.07571)

Papers citing "Two models of double descent for weak features"

50 / 269 papers shown
Early Stopping in Deep Networks: Double Descent and How to Eliminate it
Reinhard Heckel, Fatih Yilmaz
20 Jul 2020

Prediction in latent factor regression: Adaptive PCR and beyond
Xin Bing, F. Bunea, Seth Strimas-Mackey, M. Wegkamp
20 Jul 2020

Large scale analysis of generalization error in learning using margin based classification methods
Journal of Statistical Mechanics: Theory and Experiment (JSTAT), 2020
Hanwen Huang, Qinglong Yang
16 Jul 2020

Understanding Implicit Regularization in Over-Parameterized Single Index Model
Journal of the American Statistical Association (JASA), 2020
Jianqing Fan, Zhuoran Yang, Mengxin Yu
16 Jul 2020

How benign is benign overfitting?
International Conference on Learning Representations (ICLR), 2020
Amartya Sanyal, P. Dokania, Varun Kanade, Juil Sock
08 Jul 2020

Exploring Weight Importance and Hessian Bias in Model Pruning
Mingchen Li, Yahya Sattar, Christos Thrampoulidis, Samet Oymak
19 Jun 2020

On Sparsity in Overparametrised Shallow ReLU Networks
Jaume de Dios, Joan Bruna
18 Jun 2020
Revisiting minimum description length complexity in overparameterized models
Raaz Dwivedi, Chandan Singh, Bin Yu, Martin J. Wainwright
17 Jun 2020

Interpolation and Learning with Scale Dependent Kernels
Nicolò Pagliana, Alessandro Rudi, Ernesto De Vito, Lorenzo Rosasco
17 Jun 2020

Overparameterization and generalization error: weighted trigonometric interpolation
Yuege Xie, H. Chou, Holger Rauhut, Rachel A. Ward
15 Jun 2020

Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
SIAM Journal on Mathematics of Data Science (SIMODS), 2020
Yehuda Dar, Richard G. Baraniuk
12 Jun 2020

Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization
Neural Information Processing Systems (NeurIPS), 2020
Benjamin Aubin, Florent Krzakala, Yue M. Lu, Lenka Zdeborová
11 Jun 2020

Asymptotics of Ridge(less) Regression under General Source Condition
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco
11 Jun 2020
On Uniform Convergence and Low-Norm Interpolation Learning
Neural Information Processing Systems (NeurIPS), 2020
Lijia Zhou, Danica J. Sutherland, Nathan Srebro
10 Jun 2020

On the Optimal Weighted $\ell_2$ Regularization in Overparameterized Linear Regression
Neural Information Processing Systems (NeurIPS), 2020
Denny Wu, Ji Xu
10 Jun 2020

An Overview of Neural Network Compression
James O'Neill
05 Jun 2020

Model Repair: Robust Recovery of Over-Parameterized Statistical Models
Chao Gao, John D. Lafferty
20 May 2020

Classification vs regression in overparameterized regimes: Does the loss function matter?
Vidya Muthukumar, Adhyyan Narang, Vignesh Subramanian, M. Belkin, Daniel J. Hsu, A. Sahai
16 May 2020

Generalization Error of Generalized Linear Models in High Dimensions
International Conference on Machine Learning (ICML), 2020
M. Motavali Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, S. Rangan, A. Fletcher
01 May 2020
Generalization Error for Linear Regression under Distributed Learning
International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), 2020
Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
30 Apr 2020

Finite-sample Analysis of Interpolating Linear Classifiers in the Overparameterized Regime
Journal of Machine Learning Research (JMLR), 2020
Niladri S. Chatterji, Philip M. Long
25 Apr 2020

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
23 Apr 2020

Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Journal of the American Statistical Association (JASA), 2020
Tengyuan Liang, Hai Tran-Bach
09 Apr 2020

On the robustness of the minimum $\ell_2$ interpolator
Geoffrey Chinot, M. Lerasle
12 Mar 2020

Rethinking Parameter Counting in Deep Models: Effective Dimensionality Revisited
Wesley J. Maddox, Gregory W. Benton, A. Wilson
04 Mar 2020
Optimal Regularization Can Mitigate Double Descent
International Conference on Learning Representations (ICLR), 2020
Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma
04 Mar 2020

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime
International Conference on Machine Learning (ICML), 2020
Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala
02 Mar 2020

The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization
Conference on Uncertainty in Artificial Intelligence (UAI), 2020
Yifei Min, Lin Chen, Amin Karbasi
25 Feb 2020

Subspace Fitting Meets Regression: The Effects of Supervision and Orthonormality Constraints on Double Descent of Generalization Errors
International Conference on Machine Learning (ICML), 2020
Yehuda Dar, Paul Mayer, Lorenzo Luzi, Richard G. Baraniuk
25 Feb 2020

Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method
Michal Derezinski, Rajiv Khanna, Michael W. Mahoney
21 Feb 2020
Implicit Regularization of Random Feature Models
International Conference on Machine Learning (ICML), 2020
Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel
19 Feb 2020

Self-explaining AI as an alternative to interpretable AI
Artificial General Intelligence (AGI), 2020
Daniel C. Elton
12 Feb 2020

Sparse Recovery With Non-Linear Fourier Features
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Ayça Özçelikkale
12 Feb 2020

Characterizing Structural Regularities of Labeled Data in Overparameterized Models
International Conference on Machine Learning (ICML), 2020
Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer
08 Feb 2020

Interpolating Predictors in High-Dimensional Factor Regression
F. Bunea, Seth Strimas-Mackey, M. Wegkamp
06 Feb 2020

A Precise High-Dimensional Asymptotic Theory for Boosting and Minimum-$\ell_1$-Norm Interpolated Classifiers
Social Science Research Network (SSRN), 2020
Tengyuan Liang, Pragya Sur
05 Feb 2020

A Deep Conditioning Treatment of Neural Networks
International Conference on Algorithmic Learning Theory (ALT), 2020
Naman Agarwal, Pranjal Awasthi, Satyen Kale
04 Feb 2020
Overfitting Can Be Harmless for Basis Pursuit, But Only to a Degree
Peizhong Ju, Xiaojun Lin, Jia Liu
02 Feb 2020

Analytic Study of Double Descent in Binary Classification: The Impact of Loss
International Symposium on Information Theory (ISIT), 2020
Ganesh Ramachandra Kini, Christos Thrampoulidis
30 Jan 2020

Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective
Neural Computation, 2020
S. Amari
20 Jan 2020

Risk of the Least Squares Minimum Norm Estimator under the Spike Covariance Model
Yasaman Mahdaviyeh, Zacharie Naulet
31 Dec 2019

On the Bias-Variance Tradeoff: Textbooks Need an Update
Brady Neal
17 Dec 2019

More Data Can Hurt for Linear Regression: Sample-wise Double Descent
Preetum Nakkiran
16 Dec 2019

Double descent in the condition number
T. Poggio, Gil Kur, Andy Banburski
12 Dec 2019
Exact expressions for double descent and implicit regularization via surrogate random design
Neural Information Processing Systems (NeurIPS), 2019
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney
10 Dec 2019

Deep Double Descent: Where Bigger Models and More Data Hurt
International Conference on Learning Representations (ICLR), 2019
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever
04 Dec 2019

How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
International Conference on Learning Representations (ICLR), 2019
Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu
27 Nov 2019

Implicit Regularization and Convergence for Weight Normalization
Neural Information Processing Systems (NeurIPS), 2019
Xiaoxia Wu, Guang Cheng, Zhaolin Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel A. Ward, Qiang Liu
18 Nov 2019

A Model of Double Descent for High-dimensional Binary Linear Classification
Information and Inference: A Journal of the IMA, 2019
Zeyu Deng, A. Kammoun, Christos Thrampoulidis
13 Nov 2019

A Function Space View of Bounded Norm Infinite Width ReLU Nets: The Multivariate Case
International Conference on Learning Representations (ICLR), 2019
Greg Ongie, Rebecca Willett, Daniel Soudry, Nathan Srebro
03 Oct 2019