ResearchTrend.AI

Optimal Regularization Can Mitigate Double Descent

Preetum Nakkiran, Prayaag Venkat, Sham Kakade, Tengyu Ma
4 March 2020 · arXiv:2003.01897

Papers citing "Optimal Regularization Can Mitigate Double Descent"

29 papers shown

The Double Descent Behavior in Two Layer Neural Network for Binary Classification
Chathurika S Abeykoon, A. Beknazaryan, Hailin Sang
27 Apr 2025

How more data can hurt: Instability and regularization in next-generation reservoir computing
Yuanzhao Zhang, Edmilson Roque dos Santos, Sean P. Cornelius
28 Jan 2025

Learning Linear Dynamics from Bilinear Observations
Yahya Sattar, Yassir Jedra, Sarah Dean
24 Sep 2024

Information-Theoretic Progress Measures reveal Grokking is an Emergent Phase Transition
Kenzo Clauw, S. Stramaglia, Daniele Marinazzo
16 Aug 2024

Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance Analysis
Yufan Li, Subhabrata Sen, Ben Adlam
18 Apr 2024 · MLT

Generalized equivalences between subsampling and ridge regularization
Pratik V. Patil, Jin-Hong Du
29 May 2023

DSD$^2$: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?
Victor Quétu, Enzo Tartaglione
02 Mar 2023

Can we avoid Double Descent in Deep Neural Networks?
Victor Quétu, Enzo Tartaglione
26 Feb 2023 · AI4CE

Cliff-Learning
T. T. Wang, I. Zablotchi, Nir Shavit, Jonathan S. Rosenfeld
14 Feb 2023

Gradient flow in the Gaussian covariate model: exact solution of learning curves and multiple descent structures
Antoine Bodin, N. Macris
13 Dec 2022

A Survey of Learning Curves with Bad Behavior: or How More Data Need Not Lead to Better Performance
Marco Loog, T. Viering
25 Nov 2022

A Solvable Model of Neural Scaling Laws
A. Maloney, Daniel A. Roberts, J. Sully
30 Oct 2022

Global Convergence of SGD On Two Layer Neural Nets
Pulkit Gopalani, Anirbit Mukherjee
20 Oct 2022

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022

Information FOMO: The unhealthy fear of missing out on information. A method for removing misleading data for healthier models
Ethan Pickering, T. Sapsis
27 Aug 2022

Regularization-wise double descent: Why it occurs and how to eliminate it
Fatih Yilmaz, Reinhard Heckel
03 Jun 2022

Estimation under Model Misspecification with Fake Features
Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
07 Mar 2022

Learning Curves for Decision Making in Supervised Machine Learning: A Survey
F. Mohr, Jan N. van Rijn
28 Jan 2022

A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar, Vidya Muthukumar, Richard G. Baraniuk
06 Sep 2021

Interpolation can hurt robust generalization even when there is no noise
Konstantin Donhauser, Alexandru Țifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
05 Aug 2021

Double Descent and Other Interpolation Phenomena in GANs
Lorenzo Luzi, Yehuda Dar, Richard Baraniuk
07 Jun 2021

Towards an Understanding of Benign Overfitting in Neural Networks
Zhu Li, Zhi-Hua Zhou, A. Gretton
06 Jun 2021 · MLT

The Shape of Learning Curves: a Review
T. Viering, Marco Loog
19 Mar 2021

Low Curvature Activations Reduce Overfitting in Adversarial Training
Vasu Singla, Sahil Singla, David Jacobs, S. Feizi
15 Feb 2021 · AAML

Multiple Descent: Design Your Own Generalization Curve
Lin Chen, Yifei Min, M. Belkin, Amin Karbasi
03 Aug 2020 · DRL

Shape Matters: Understanding the Implicit Bias of the Noise Covariance
Jeff Z. HaoChen, Colin Wei, J. Lee, Tengyu Ma
15 Jun 2020

Random Features for Kernel Approximation: A Survey on Algorithms, Theory, and Beyond
Fanghui Liu, Xiaolin Huang, Yudong Chen, Johan A. K. Suykens
23 Apr 2020 · BDL

Bayesian Deep Learning and a Probabilistic Perspective of Generalization
A. Wilson, Pavel Izmailov
20 Feb 2020 · UQCV · BDL · OOD

Exact expressions for double descent and implicit regularization via surrogate random design
Michal Derezinski, Feynman T. Liang, Michael W. Mahoney
10 Dec 2019