Good Classifiers are Abundant in the Interpolating Regime

22 June 2020
Ryan Theisen
Jason M. Klusowski
Michael W. Mahoney
arXiv: 2006.12625 (abs / PDF / HTML)

Papers citing "Good Classifiers are Abundant in the Interpolating Regime"

2 / 2 papers shown
Title · Authors · Date

Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks
Pascal Esser, L. C. Vankadara, Debarghya Ghoshdastidar
07 Dec 2021

Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
Charles H. Martin, Michael W. Mahoney
02 Oct 2018