More is Better in Modern Machine Learning: when Infinite Overparameterization is Optimal and Overfitting is Obligatory

24 November 2023
James B. Simon
Dhruva Karkada
Nikhil Ghosh
Mikhail Belkin

Papers citing "More is Better in Modern Machine Learning: when Infinite Overparameterization is Optimal and Overfitting is Obligatory"

High-dimensional Analysis of Knowledge Distillation: Weak-to-Strong Generalization and Scaling Laws
M. E. Ildiz, Halil Alperen Gozeten, Ege Onur Taga, Marco Mondelli, Samet Oymak
24 Oct 2024
How Feature Learning Can Improve Neural Scaling Laws
Blake Bordelon, Alexander B. Atanasov, C. Pehlevan
26 Sep 2024
The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks
James B. Simon, Madeline Dickens, Dhruva Karkada, M. DeWeese
08 Oct 2021
Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, C. Pehlevan
07 Feb 2020
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020