Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem
6 October 2018 · Alon Brutzkus, Amir Globerson · MLT
arXiv: 1810.03037

Papers citing "Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem"

5 / 5 papers shown
Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck
Neural Information Processing Systems (NeurIPS), 2019
Maximilian Igl, K. Ciosek, Yingzhen Li, Sebastian Tschiatschek, Cheng Zhang, Sam Devlin, Katja Hofmann
OffRL · 28 Oct 2019

On the Power and Limitations of Random Features for Understanding Neural Networks
Gilad Yehudai, Ohad Shamir
MLT · 01 Apr 2019

Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak
NoLa · 27 Mar 2019

Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?
Samet Oymak, Mahdi Soltanolkotabi
ODL · 25 Dec 2018

Size-Independent Sample Complexity of Neural Networks
Noah Golowich, Alexander Rakhlin, Ohad Shamir
18 Dec 2017