On the robustness of minimum norm interpolators and regularized empirical risk minimizers

Annals of Statistics (Ann. Stat.), 2020
1 December 2020
Geoffrey Chinot
Matthias Löffler
Sara van de Geer

Papers citing "On the robustness of minimum norm interpolators and regularized empirical risk minimizers"

14 / 14 papers shown
Transfer Learning for Benign Overfitting in High-Dimensional Linear Regression
Yeichan Kim
Ilmun Kim
Seyoung Park
17 Oct 2025
Prediction Risk and Estimation Risk of the Ridgeless Least Squares Estimator under General Assumptions on Regression Errors
International Conference on Learning Representations (ICLR), 2023
Sungyoon Lee
S. Lee
22 May 2023
Implicit Regularization Leads to Benign Overfitting for Sparse Linear Regression
International Conference on Machine Learning (ICML), 2023
Mo Zhou
Rong Ge
01 Feb 2023
A Non-Asymptotic Moreau Envelope Theory for High-Dimensional Generalized Linear Models
Neural Information Processing Systems (NeurIPS), 2022
Lijia Zhou
Frederic Koehler
Pragya Sur
Danica J. Sutherland
Nathan Srebro
21 Oct 2022
The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers
International Conference on Learning Representations (ICLR), 2022
Zong-xiao Li
Chong You
Srinadh Bhojanapalli
Daliang Li
A. S. Rawat
Kenneth Q Ye
Felix Chern
Felix X. Yu
Ruiqi Guo
Surinder Kumar
12 Oct 2022
Deep Linear Networks can Benignly Overfit when Shallow Ones Do
Journal of Machine Learning Research (JMLR), 2022
Niladri S. Chatterji
Philip M. Long
19 Sep 2022
Fast Rates for Noisy Interpolation Require Rethinking the Effects of Inductive Bias
International Conference on Machine Learning (ICML), 2022
Konstantin Donhauser
Nicolò Ruggeri
Stefan Stojanovic
Fanny Yang
07 Mar 2022
Benign Overfitting without Linearity: Neural Network Classifiers Trained by Gradient Descent for Noisy Linear Data
Annual Conference on Computational Learning Theory (COLT), 2022
Spencer Frei
Niladri S. Chatterji
Peter L. Bartlett
11 Feb 2022
Tight bounds for minimum ℓ1-norm interpolation of noisy data
Guillaume Wang
Konstantin Donhauser
Fanny Yang
10 Nov 2021
Foolish Crowds Support Benign Overfitting
Niladri S. Chatterji
Philip M. Long
06 Oct 2021
A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
Yehuda Dar
Vidya Muthukumar
Richard G. Baraniuk
06 Sep 2021
Uniform Convergence of Interpolators: Gaussian Width, Norm Bounds, and Benign Overfitting
Frederic Koehler
Lijia Zhou
Danica J. Sutherland
Nathan Srebro
17 Jun 2021
Nonasymptotic theory for two-layer neural networks: Beyond the bias-variance trade-off
Huiyuan Wang
Wei Lin
09 Jun 2021
AdaBoost and robust one-bit compressed sensing
Mathematical Statistics and Learning (MSL), 2021
Geoffrey Chinot
Felix Kuchelmeister
Matthias Löffler
Sara van de Geer
05 May 2021