A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors

23 July 2022
Nikhil Ghosh
M. Belkin

Papers citing "A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors"

9 / 9 papers shown
  • Feature maps for the Laplacian kernel and its generalizations. Sudhendu Ahir, Parthe Pandit. 24 Feb 2025.
  • Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality. Marko Medvedev, Gal Vardi, Nathan Srebro. 05 Sep 2024.
  • Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization. Yutong Wang, Rishi Sonthalia, Wei Hu. 12 Mar 2024.
  • More is Better in Modern Machine Learning: when Infinite Overparameterization is Optimal and Overfitting is Obligatory. James B. Simon, Dhruva Karkada, Nikhil Ghosh, Mikhail Belkin. AI4CE, BDL. 24 Nov 2023.
  • Noisy Interpolation Learning with Shallow Univariate ReLU Networks. Nirmit Joshi, Gal Vardi, Nathan Srebro. 28 Jul 2023.
  • Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension. Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart. MLT. 23 May 2023.
  • Benign Overfitting in Linear Classifiers and Leaky ReLU Networks from KKT Conditions for Margin Maximization. Spencer Frei, Gal Vardi, Peter L. Bartlett, Nathan Srebro. 02 Mar 2023.
  • Foolish Crowds Support Benign Overfitting. Niladri S. Chatterji, Philip M. Long. 06 Oct 2021.
  • When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning? Gavin Brown, Mark Bun, Vitaly Feldman, Adam D. Smith, Kunal Talwar. 11 Dec 2020.