
On the Universality of the Double Descent Peak in Ridgeless Regression

International Conference on Learning Representations (ICLR), 2020
5 October 2020
David Holzmüller

Papers citing "On the Universality of the Double Descent Peak in Ridgeless Regression"

9 papers shown.

Models of Heavy-Tailed Mechanistic Universality
Liam Hodgkinson, Zhichao Wang, Michael W. Mahoney
04 Jun 2025

Overparameterized Multiple Linear Regression as Hyper-Curve Fitting
E. Atza, N. Budko
11 Apr 2024

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
Neural Information Processing Systems (NeurIPS), 2023
Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart
23 May 2023

Regularization Trade-offs with Fake Features
European Signal Processing Conference (EUSIPCO), 2022
Martin Hellkvist, Ayça Özçelikkale, Anders Ahlén
01 Dec 2022

Monotonicity and Double Descent in Uncertainty Estimation with Gaussian Processes
International Conference on Machine Learning (ICML), 2022
Liam Hodgkinson, Christopher van der Heide, Fred Roosta, Michael W. Mahoney
14 Oct 2022

On the Impossible Safety of Large AI Models
El-Mahdi El-Mhamdi, Sadegh Farhadkhani, R. Guerraoui, Nirupam Gupta, L. Hoang, Rafael Pinot, Sébastien Rouault, John Stephan
30 Sep 2022

A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
SIAM Journal on Mathematics of Data Science (SIMODS), 2022
Nikhil Ghosh, M. Belkin
23 Jul 2022

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran
14 Jul 2022

Data splitting improves statistical performance in overparametrized regimes
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Nicole Mücke, Enrico Reiss, Jonas Rungenhagen, Markus Klein
21 Oct 2021