Two models of double descent for weak features

SIAM Journal on Mathematics of Data Science (SIMODS), 2019
18 March 2019
M. Belkin
Daniel J. Hsu
Ji Xu
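The cited paper analyzes the "double descent" risk curve for least squares fit on a subset of weak features. As a quick illustration only (a minimal sketch, not the paper's exact model: the dimensions, noise level, and first-`p`-features selection rule below are arbitrary choices), a NumPy simulation of min-norm least squares shows the characteristic risk peak at the interpolation threshold p = n, with the risk descending again in the overparameterized regime:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, sigma = 40, 100, 0.5           # samples, total features, noise std (arbitrary)
beta = rng.standard_normal(D)
beta /= np.linalg.norm(beta)          # unit-norm true coefficient vector

def avg_risk(p, trials=200, n_test=500):
    """Average test MSE of the min-norm least-squares fit using the first p features."""
    out = []
    for _ in range(trials):
        X = rng.standard_normal((n, D))
        y = X @ beta + sigma * rng.standard_normal(n)
        coef = np.linalg.pinv(X[:, :p]) @ y        # min-norm solution on p features
        Xt = rng.standard_normal((n_test, D))
        yt = Xt @ beta                              # noiseless test targets
        out.append(np.mean((Xt[:, :p] @ coef - yt) ** 2))
    return float(np.mean(out))

risks = {p: avg_risk(p) for p in (10, 35, 40, 45, 90)}
# The risk typically spikes near p = n = 40 and decreases again for p > n.
```

Features beyond the fitted `p` act as extra effective noise, which is what drives the spike at the threshold; the exact curve depends on the chosen signal-to-noise ratio.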

Papers citing "Two models of double descent for weak features"

50 / 269 papers shown
Double Descent Optimization Pattern and Aliasing: Caveats of Noisy Labels
Florian Dubost
Erin Hong
Max Pike
Siddharth Sharma
Siyi Tang
Nandita Bhaskhar
Christopher Lee-Messer
D. Rubin
NoLa
254
2
0
03 Jun 2021
Optimization Variance: Exploring Generalization Properties of DNNs
Xiao Zhang
Dongrui Wu
Haoyi Xiong
Bo Dai
150
5
0
03 Jun 2021
Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime
Neural Information Processing Systems (NeurIPS), 2021
Hugo Cui
Bruno Loureiro
Florent Krzakala
Lenka Zdeborová
254
94
0
31 May 2021
Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
Acta Numerica (AN), 2021
M. Belkin
189
207
0
29 May 2021
Support vector machines and linear regression coincide with very high-dimensional features
Neural Information Processing Systems (NeurIPS), 2021
Navid Ardeshir
Clayton Sanford
Daniel J. Hsu
193
31
0
28 May 2021
Model Mismatch Trade-offs in LMMSE Estimation
European Signal Processing Conference (EUSIPCO), 2021
Martin Hellkvist
Ayça Özçelikkale
79
3
0
25 May 2021
A Precise Performance Analysis of Support Vector Regression
International Conference on Machine Learning (ICML), 2021
Houssem Sifaou
A. Kammoun
Mohamed-Slim Alouini
133
7
0
21 May 2021
Risk Bounds for Over-parameterized Maximum Margin Classification on Sub-Gaussian Mixtures
Neural Information Processing Systems (NeurIPS), 2021
Yuan Cao
Quanquan Gu
M. Belkin
173
56
0
28 Apr 2021
Lower Bounds on the Generalization Error of Nonlinear Learning Models
IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory), 2021
Inbar Seroussi
Ofer Zeitouni
179
6
0
26 Mar 2021
The Geometry of Over-parameterized Regression and Adversarial Perturbations
J. Rocks
Pankaj Mehta
AAML
179
10
0
25 Mar 2021
Benign Overfitting of Constant-Stepsize SGD for Linear Regression
Annual Conference on Computational Learning Theory (COLT), 2021
Difan Zou
Jingfeng Wu
Vladimir Braverman
Quanquan Gu
Sham Kakade
223
72
0
23 Mar 2021
Comments on Leo Breiman's paper 'Statistical Modeling: The Two Cultures' (Statistical Science, 2001, 16(3), 199-231)
Jelena Bradic
Yinchu Zhu
90
0
0
21 Mar 2021
The Common Intuition to Transfer Learning Can Win or Lose: Case Studies for Linear Regression
SIAM Journal on Mathematics of Data Science (SIMODS), 2021
Yehuda Dar
Daniel LeJeune
Richard G. Baraniuk
MLT
175
6
0
09 Mar 2021
On the Generalization Power of Overfitted Two-Layer Neural Tangent Kernel Models
International Conference on Machine Learning (ICML), 2021
Peizhong Ju
Xiaojun Lin
Ness B. Shroff
MLT
152
13
0
09 Mar 2021
Asymptotics of Ridge Regression in Convolutional Models
International Conference on Machine Learning (ICML), 2021
Mojtaba Sahraee-Ardakan
Tung Mai
Anup B. Rao
Ryan Rossi
S. Rangan
A. Fletcher
MLT
91
2
0
08 Mar 2021
Label-Imbalanced and Group-Sensitive Classification under Overparameterization
Neural Information Processing Systems (NeurIPS), 2021
Ganesh Ramachandra Kini
Orestis Paraskevas
Samet Oymak
Christos Thrampoulidis
500
112
0
02 Mar 2021
Asymptotic Risk of Overparameterized Likelihood Models: Double Descent Theory for Deep Neural Networks
Ryumei Nakada
Masaaki Imaizumi
210
2
0
28 Feb 2021
Learning curves of generic features maps for realistic datasets with a teacher-student model
Neural Information Processing Systems (NeurIPS), 2021
Bruno Loureiro
Cédric Gerbelot
Hugo Cui
Sebastian Goldt
Florent Krzakala
M. Mézard
Lenka Zdeborová
376
153
0
16 Feb 2021
Double-descent curves in neural networks: a new perspective using Gaussian processes
AAAI Conference on Artificial Intelligence (AAAI), 2021
Ouns El Harzli
Bernardo Cuenca Grau
Guillermo Valle Pérez
A. Louis
395
6
0
14 Feb 2021
Linear Regression with Distributed Learning: A Generalization Error Perspective
IEEE Transactions on Signal Processing (IEEE TSP), 2021
Martin Hellkvist
Ayça Özçelikkale
Anders Ahlén
FedML
292
10
0
22 Jan 2021
Adversarially Robust Estimate and Risk Analysis in Linear Regression
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Yue Xing
Ruizhi Zhang
Guang Cheng
AAML
192
28
0
18 Dec 2020
Provable Benefits of Overparameterization in Model Compression: From Double Descent to Pruning Neural Networks
AAAI Conference on Artificial Intelligence (AAAI), 2020
Xiangyu Chang
Yingcong Li
Samet Oymak
Christos Thrampoulidis
282
58
0
16 Dec 2020
Avoiding The Double Descent Phenomenon of Random Feature Models Using Hybrid Regularization
Kelvin K. Kan
J. Nagy
Lars Ruthotto
AI4CE
131
6
0
11 Dec 2020
Analyzing Finite Neural Networks: Can We Trust Neural Tangent Kernel Theory?
Mariia Seleznova
Gitta Kutyniok
AAML
241
36
0
08 Dec 2020
Removing Spurious Features can Hurt Accuracy and Affect Groups Disproportionately
Fereshte Khani
Abigail Z. Jacobs
FaML
273
70
0
07 Dec 2020
Risk-Monotonicity in Statistical Learning
Neural Information Processing Systems (NeurIPS), 2020
Zakaria Mhammedi
559
8
0
28 Nov 2020
Dimensionality reduction, regularization, and generalization in overparameterized regressions
SIAM Journal on Mathematics of Data Science (SIMODS), 2020
Ningyuan Huang
D. Hogg
Soledad Villar
237
19
0
23 Nov 2020
Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting and Regularization
SIAM Journal on Mathematics of Data Science (SIMODS), 2020
Ke Wang
Christos Thrampoulidis
405
32
0
18 Nov 2020
Direction Matters: On the Implicit Bias of Stochastic Gradient Descent with Moderate Learning Rate
Jingfeng Wu
Difan Zou
Vladimir Braverman
Quanquan Gu
323
18
0
04 Nov 2020
Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition
Ben Adlam
Jeffrey Pennington
UD
256
104
0
04 Nov 2020
Memorizing without overfitting: Bias, variance, and interpolation in over-parameterized models
Physical Review Research (PRResearch), 2020
J. Rocks
Pankaj Mehta
503
56
0
26 Oct 2020
Asymptotic Behavior of Adversarial Training in Binary Classification
Hossein Taheri
Ramtin Pedarsani
Christos Thrampoulidis
AAML
337
16
0
26 Oct 2020
Increasing Depth Leads to U-Shaped Test Risk in Over-parameterized Convolutional Networks
Eshaan Nichani
Adityanarayanan Radhakrishnan
Caroline Uhler
269
9
0
19 Oct 2020
A Multi-resolution Theory for Approximating Infinite-$p$-Zero-$n$: Transitional Inference, Individualized Predictions, and a World Without Bias-Variance Trade-off
Journal of the American Statistical Association (JASA), 2020
Xinran Li
Xiangxu Meng
145
15
0
17 Oct 2020
What causes the test error? Going beyond bias-variance via ANOVA
Journal of Machine Learning Research (JMLR), 2020
Licong Lin
Guang Cheng
284
35
0
11 Oct 2020
Temporal Difference Uncertainties as a Signal for Exploration
Sebastian Flennerhag
Jane X. Wang
Pablo Sprechmann
Francesco Visin
Alexandre Galashov
Steven Kapturowski
Diana Borsa
N. Heess
André Barreto
Razvan Pascanu
OffRL
204
16
0
05 Oct 2020
On the Universality of the Double Descent Peak in Ridgeless Regression
International Conference on Learning Representations (ICLR), 2020
David Holzmüller
504
14
0
05 Oct 2020
Benign overfitting in ridge regression
Alexander Tsigler
Peter L. Bartlett
309
196
0
29 Sep 2020
Experimental Design for Overparameterized Learning with Application to Single Shot Deep Active Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2020
N. Shoham
H. Avron
BDL
209
13
0
27 Sep 2020
On the proliferation of support vectors in high dimensions
International Conference on Artificial Intelligence and Statistics (AISTATS), 2020
Daniel J. Hsu
Vidya Muthukumar
Ji Xu
242
52
0
22 Sep 2020
Minimum discrepancy principle strategy for choosing $k$ in $k$-NN regression
Yaroslav Averyanov
Alain Celisse
467
0
0
20 Aug 2020
The Neural Tangent Kernel in High Dimensions: Triple Descent and a Multi-Scale Theory of Generalization
Ben Adlam
Jeffrey Pennington
192
131
0
15 Aug 2020
On the Generalization Properties of Adversarial Training
Yue Xing
Qifan Song
Guang Cheng
AAML
237
36
0
15 Aug 2020
Provable More Data Hurt in High Dimensional Least Squares Estimator
Zeng Li
Chuanlong Xie
Qinwen Wang
132
6
0
14 Aug 2020
The Slow Deterioration of the Generalization Error of the Random Feature Model
Mathematical and Scientific Machine Learning (MSML), 2020
Chao Ma
Lei Wu
E. Weinan
134
16
0
13 Aug 2020
What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
Neural Information Processing Systems (NeurIPS), 2020
Vitaly Feldman
Chiyuan Zhang
TDI
537
564
0
09 Aug 2020
Benign Overfitting and Noisy Features
Zhu Li
Weijie Su
Dino Sejdinovic
206
26
0
06 Aug 2020
Multiple Descent: Design Your Own Generalization Curve
Lin Chen
Yifei Min
M. Belkin
Amin Karbasi
DRL
687
63
0
03 Aug 2020
The Interpolation Phase Transition in Neural Networks: Memorization and Generalization under Lazy Training
Annals of Statistics (Ann. Stat.), 2020
Andrea Montanari
Yiqiao Zhong
394
103
0
25 Jul 2020
Canonical thresholding for non-sparse high-dimensional linear regression
Annals of Statistics (Ann. Stat.), 2020
I. Silin
Jianqing Fan
173
6
0
24 Jul 2020