Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks

Neural Information Processing Systems (NeurIPS), 2022
10 February 2022
Sitan Chen
Aravind Gollakota
Adam R. Klivans
Raghu Meka

Papers citing "Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks"

22 papers shown:

1. Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees. International Conference on Learning Representations (ICLR), 2025. Gautam Chandrasekaran, Adam R. Klivans, Lin Lin Lee, Konstantinos Stavropoulos. 22 Feb 2025. [OOD]
2. On the Hardness of Learning One Hidden Layer Neural Networks. Shuchen Li, Ilias Zadik, Manolis Zampetakis. 04 Oct 2024.
3. Sequencing the Neurome: Towards Scalable Exact Parameter Reconstruction of Black-Box Neural Networks. Judah Goldfeder, Quinten Roets, Gabe Guo, John Wright, Hod Lipson. 27 Sep 2024.
4. Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization. Journal of Fourier Analysis and Applications (JFAA), 2024. Holger Boche, Vít Fojtík, Adalbert Fono, Gitta Kutyniok. 12 Aug 2024.
5. Learning Neural Networks with Sparse Activations. Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath, Raghu Meka. 26 Jun 2024.
6. Hardness of Learning Neural Networks under the Manifold Hypothesis. B. Kiani, Jason Wang, Melanie Weber. 03 Jun 2024.
7. Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning. IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2024. Noah Golowich, Ankur Moitra, Dhruv Rohatgi. 04 Apr 2024. [OffRL]
8. Provably learning a multi-head attention layer. Sitan Chen, Yuanzhi Li. 06 Feb 2024. [MLT]
9. The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents. International Conference on Machine Learning (ICML), 2024. Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala. 05 Feb 2024. [MLT]
10. Looped Transformers are Better at Learning Learning Algorithms. International Conference on Learning Representations (ICLR), 2023. Liu Yang, Kangwook Lee, Robert D. Nowak, Dimitris Papailiopoulos. 21 Nov 2023.
11. Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes. Yifei Wang, Mert Pilanci. 18 Nov 2023.
12. Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods. Neural Information Processing Systems (NeurIPS), 2023. Constantine Caramanis, Eleni Psaroudaki, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos. 08 Oct 2023.
13. Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials. Annual Conference on Computational Learning Theory (COLT), 2023. Ilias Diakonikolas, D. Kane. 24 Jul 2023.
14. Most Neural Networks Are Almost Learnable. Neural Information Processing Systems (NeurIPS), 2023. Amit Daniely, Nathan Srebro, Gal Vardi. 25 May 2023.
15. Algorithmic Decorrelation and Planted Clique in Dependent Random Graphs: The Case of Extra Triangles. IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2023. Guy Bresler, Chenghao Guo, Yury Polyanskiy. 17 May 2023.
16. Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy. Neural Information Processing Systems (NeurIPS), 2023. Amit Daniely, Nathan Srebro, Gal Vardi. 15 Feb 2023.
17. Learning Single-Index Models with Shallow Neural Networks. Neural Information Processing Systems (NeurIPS), 2022. A. Bietti, Joan Bruna, Clayton Sanford, M. Song. 27 Oct 2022.
18. Magnitude and Angle Dynamics in Training Single ReLU Neurons. Neural Networks (NN), 2022. Sangmin Lee, Byeongsu Sim, Jong Chul Ye. 27 Sep 2022. [MLT]
19. Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions. International Conference on Learning Representations (ICLR), 2022. Sitan Chen, Sinho Chewi, Jungshian Li, Yuanzhi Li, Adil Salim, Anru R. Zhang. 22 Sep 2022. [DiffM]
20. Learning (Very) Simple Generative Models Is Hard. Neural Information Processing Systems (NeurIPS), 2022. Sitan Chen, Jungshian Li, Yuanzhi Li. 31 May 2022.
21. Learning ReLU networks to high uniform accuracy is intractable. International Conference on Learning Representations (ICLR), 2022. Julius Berner, Philipp Grohs, F. Voigtlaender. 26 May 2022.
22. Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete. Neural Information Processing Systems (NeurIPS), 2022. Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber. 04 Apr 2022. [OffRL]