Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity
Pritish Kamath, Omar Montasser, Nathan Srebro. Annual Conference Computational Learning Theory (COLT), 2020. 9 March 2020.
arXiv: 2003.04180 (abs · PDF · HTML)
Papers citing "Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity" (24 / 24 papers shown)
Dimension lower bounds for linear approaches to function approximation. Daniel Hsu. 18 Aug 2025.
The Power of Random Features and the Limits of Distribution-Free Gradient Descent. Ari Karchmer, Eran Malach. 15 May 2025.
Spherical dimension. Bogdan Chornomaz, Shay Moran, Tom Waknine. Annual Conference Computational Learning Theory (COLT), 2025. 13 Mar 2025.
On Reductions and Representations of Learning Problems in Euclidean Spaces. Bogdan Chornomaz, Shay Moran, Tom Waknine. Symposium on the Theory of Computing (STOC), 2024. 16 Nov 2024.
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit. Jason D. Lee, Kazusato Oko, Taiji Suzuki, Denny Wu. 03 Jun 2024.
RedEx: Beyond Fixed Representation Methods via Convex Optimization. Amit Daniely, Mariano Schain, Gilad Yehudai. International Conference on Algorithmic Learning Theory (ALT), 2024. 15 Jan 2024.
Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck. Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang. 07 Sep 2023.
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. Liam Collins, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai. International Conference on Machine Learning (ICML), 2023. 13 Jul 2023.
SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics. Emmanuel Abbe, Enric Boix-Adserà, Theodor Misiakiewicz. Annual Conference Computational Learning Theory (COLT), 2023. 21 Feb 2023.
On the non-universality of deep learning: quantifying the cost of symmetry. Emmanuel Abbe, Enric Boix-Adserà. Neural Information Processing Systems (NeurIPS), 2022. 05 Aug 2022.
Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit. Boaz Barak, Benjamin L. Edelman, Surbhi Goel, Sham Kakade, Eran Malach, Cyril Zhang. Neural Information Processing Systems (NeurIPS), 2022. 18 Jul 2022.
Random Feature Amplification: Feature Learning and Generalization in Neural Networks. Spencer Frei, Niladri S. Chatterji, Peter L. Bartlett. Journal of Machine Learning Research (JMLR), 2022. 15 Feb 2022.
Quantum machine learning beyond kernel methods. Sofiene Jerbi, Lukas J. Fiderer, Hendrik Poulsen Nautrup, Jonas M. Kübler, Hans J. Briegel, Vedran Dunjko. Nature Communications (Nat Commun), 2021. 25 Oct 2021.
The Eigenlearning Framework: A Conservation Law Perspective on Kernel Regression and Wide Neural Networks. James B. Simon, Madeline Dickens, Dhruva Karkada, M. DeWeese. 08 Oct 2021.
Reconstruction on Trees and Low-Degree Polynomials. Frederic Koehler, Elchanan Mossel. 14 Sep 2021.
Learning a Single Neuron with Bias Using Gradient Descent. Gal Vardi, Gilad Yehudai, Ohad Shamir. Neural Information Processing Systems (NeurIPS), 2021. 02 Jun 2021.
Understanding the Eluder Dimension. Gen Li, Pritish Kamath, Dylan J. Foster, Nathan Srebro. Neural Information Processing Systems (NeurIPS), 2021. 14 Apr 2021.
Quantifying the Benefit of Using Differentiable Learning over Tangent Kernels. Eran Malach, Pritish Kamath, Emmanuel Abbe, Nathan Srebro. International Conference on Machine Learning (ICML), 2021. 01 Mar 2021.
On the Approximation Power of Two-Layer Networks of Random ReLUs. Daniel J. Hsu, Clayton Sanford, Rocco A. Servedio, Emmanouil-Vasileios Vlatakis-Gkaragkounis. Annual Conference Computational Learning Theory (COLT), 2021. 03 Feb 2021.
The Connection Between Approximation, Depth Separation and Learnability in Neural Networks. Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir. Annual Conference Computational Learning Theory (COLT), 2021. 31 Jan 2021.
The Polynomial Method is Universal for Distribution-Free Correlational SQ Learning. Aravind Gollakota, Sushrut Karmalkar, Adam R. Klivans. 22 Oct 2020.
A case where a spindly two-layer linear network whips any neural network with a fully connected input layer. Manfred K. Warmuth, W. Kotłowski, Ehsan Amid. 16 Oct 2020.
When Hardness of Approximation Meets Hardness of Learning. Eran Malach, Shai Shalev-Shwartz. 18 Aug 2020.
The Effects of Mild Over-parameterization on the Optimization Landscape of Shallow ReLU Neural Networks. Itay Safran, Gilad Yehudai, Ohad Shamir. Annual Conference Computational Learning Theory (COLT), 2020. 01 Jun 2020.