arXiv: 1810.03037 (v2, latest)
Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem
6 October 2018
Alon Brutzkus
Amir Globerson
Papers citing "Why do Larger Models Generalize Better? A Theoretical Perspective via the XOR Problem" (5 of 5 shown):
"Generalization in Reinforcement Learning with Selective Noise Injection and Information Bottleneck" — Maximilian Igl, K. Ciosek, Yingzhen Li, Sebastian Tschiatschek, Cheng Zhang, Sam Devlin, Katja Hofmann. Neural Information Processing Systems (NeurIPS), 2019; 28 Oct 2019.
"On the Power and Limitations of Random Features for Understanding Neural Networks" — Gilad Yehudai, Ohad Shamir. 1 Apr 2019.
"Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks" — Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak. 27 Mar 2019.
"Overparameterized Nonlinear Learning: Gradient Descent Takes the Shortest Path?" — Samet Oymak, Mahdi Soltanolkotabi. 25 Dec 2018.
"Size-Independent Sample Complexity of Neural Networks" — Noah Golowich, Alexander Rakhlin, Ohad Shamir. 18 Dec 2017.