arXiv: 2202.05258 (v3, latest)
Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Neural Information Processing Systems (NeurIPS), 2022
10 February 2022
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka
Papers citing "Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks" (22 of 22 papers shown):

1. Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees. Gautam Chandrasekaran, Adam R. Klivans, Lin Lin Lee, Konstantinos Stavropoulos. International Conference on Learning Representations (ICLR), 2025. 22 Feb 2025.
2. On the Hardness of Learning One Hidden Layer Neural Networks. Shuchen Li, Ilias Zadik, Manolis Zampetakis. 04 Oct 2024.
3. Sequencing the Neurome: Towards Scalable Exact Parameter Reconstruction of Black-Box Neural Networks. Judah Goldfeder, Quinten Roets, Gabe Guo, John Wright, Hod Lipson. 27 Sep 2024.
4. Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization. Holger Boche, Vít Fojtík, Adalbert Fono, Gitta Kutyniok. Journal of Fourier Analysis and Applications (JFAA), 2024. 12 Aug 2024.
5. Learning Neural Networks with Sparse Activations. Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath, Raghu Meka. 26 Jun 2024.
6. Hardness of Learning Neural Networks under the Manifold Hypothesis. B. Kiani, Jason Wang, Melanie Weber. 03 Jun 2024.
7. Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning. Noah Golowich, Ankur Moitra, Dhruv Rohatgi. IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2024. 04 Apr 2024.
8. Provably learning a multi-head attention layer. Sitan Chen, Yuanzhi Li. 06 Feb 2024.
9. The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents. Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala. International Conference on Machine Learning (ICML), 2024. 05 Feb 2024.
10. Looped Transformers are Better at Learning Learning Algorithms. Liu Yang, Kangwook Lee, Robert D. Nowak, Dimitris Papailiopoulos. International Conference on Learning Representations (ICLR), 2023. 21 Nov 2023.
11. Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes. Yifei Wang, Mert Pilanci. 18 Nov 2023.
12. Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods. Constantine Caramanis, Eleni Psaroudaki, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos. Neural Information Processing Systems (NeurIPS), 2023. 08 Oct 2023.
13. Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials. Ilias Diakonikolas, D. Kane. Annual Conference Computational Learning Theory (COLT), 2023. 24 Jul 2023.
14. Most Neural Networks Are Almost Learnable. Amit Daniely, Nathan Srebro, Gal Vardi. Neural Information Processing Systems (NeurIPS), 2023. 25 May 2023.
15. Algorithmic Decorrelation and Planted Clique in Dependent Random Graphs: The Case of Extra Triangles. Guy Bresler, Chenghao Guo, Yury Polyanskiy. IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2023. 17 May 2023.
16. Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy. Amit Daniely, Nathan Srebro, Gal Vardi. Neural Information Processing Systems (NeurIPS), 2023. 15 Feb 2023.
17. Learning Single-Index Models with Shallow Neural Networks. A. Bietti, Joan Bruna, Clayton Sanford, M. Song. Neural Information Processing Systems (NeurIPS), 2022. 27 Oct 2022.
18. Magnitude and Angle Dynamics in Training Single ReLU Neurons. Sangmin Lee, Byeongsu Sim, Jong Chul Ye. Neural Networks (NN), 2022. 27 Sep 2022.
19. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. Sitan Chen, Sinho Chewi, Jungshian Li, Yuanzhi Li, Adil Salim, Anru R. Zhang. International Conference on Learning Representations (ICLR), 2022. 22 Sep 2022.
20. Learning (Very) Simple Generative Models Is Hard. Sitan Chen, Jungshian Li, Yuanzhi Li. Neural Information Processing Systems (NeurIPS), 2022. 31 May 2022.
21. Learning ReLU networks to high uniform accuracy is intractable. Julius Berner, Philipp Grohs, F. Voigtlaender. International Conference on Learning Representations (ICLR), 2022. 26 May 2022.
22. Training Fully Connected Neural Networks is ∃R-Complete. Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber. Neural Information Processing Systems (NeurIPS), 2022. 04 Apr 2022.