Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks

10 February 2022
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka

Papers citing "Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks"

23 / 23 papers shown
Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees
Gautam Chandrasekaran, Adam R. Klivans, Lin Lin Lee, Konstantinos Stavropoulos
OOD
22 Feb 2025
On the Hardness of Learning One Hidden Layer Neural Networks
Shuchen Li, Ilias Zadik, Manolis Zampetakis
04 Oct 2024
Sequencing the Neurome: Towards Scalable Exact Parameter Reconstruction of Black-Box Neural Networks
Judah Goldfeder, Quinten Roets, Gabe Guo, John Wright, Hod Lipson
27 Sep 2024
Computability of Classification and Deep Learning: From Theoretical Limits to Practical Feasibility through Quantization
Holger Boche, Vít Fojtík, Adalbert Fono, Gitta Kutyniok
12 Aug 2024
Learning Neural Networks with Sparse Activations
Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath, Raghu Meka
26 Jun 2024
Hardness of Learning Neural Networks under the Manifold Hypothesis
B. Kiani, Jason Wang, Melanie Weber
03 Jun 2024
Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning
Noah Golowich, Ankur Moitra, Dhruv Rohatgi
OffRL
04 Apr 2024
The Benefits of Reusing Batches for Gradient Descent in Two-Layer Networks: Breaking the Curse of Information and Leap Exponents
Yatin Dandi, Emanuele Troiani, Luca Arnaboldi, Luca Pesce, Lenka Zdeborová, Florent Krzakala
MLT
05 Feb 2024
Looped Transformers are Better at Learning Learning Algorithms
Liu Yang, Kangwook Lee, Robert D. Nowak, Dimitris Papailiopoulos
21 Nov 2023
Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes
Yifei Wang, Mert Pilanci
18 Nov 2023
Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods
C. Caramanis, Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos
08 Oct 2023
Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Ilias Diakonikolas, D. Kane
24 Jul 2023
Most Neural Networks Are Almost Learnable
Amit Daniely, Nathan Srebro, Gal Vardi
25 May 2023
Algorithmic Decorrelation and Planted Clique in Dependent Random Graphs: The Case of Extra Triangles
Guy Bresler, Chenghao Guo, Yury Polyanskiy
17 May 2023
Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy
Amit Daniely, Nathan Srebro, Gal Vardi
15 Feb 2023
Learning Single-Index Models with Shallow Neural Networks
A. Bietti, Joan Bruna, Clayton Sanford, M. Song
27 Oct 2022
Magnitude and Angle Dynamics in Training Single ReLU Neurons
Sangmin Lee, Byeongsu Sim, Jong Chul Ye
MLT
27 Sep 2022
Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions
Sitan Chen, Sinho Chewi, Jungshian Li, Yuanzhi Li, Adil Salim, Anru R. Zhang
DiffM
22 Sep 2022
Learning (Very) Simple Generative Models Is Hard
Sitan Chen, Jungshian Li, Yuanzhi Li
31 May 2022
Learning ReLU networks to high uniform accuracy is intractable
Julius Berner, Philipp Grohs, F. Voigtlaender
26 May 2022
Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
OffRL
04 Apr 2022
Size and Depth Separation in Approximating Benign Functions with Neural Networks
Gal Vardi, Daniel Reichman, T. Pitassi, Ohad Shamir
30 Jan 2021
From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely, Gal Vardi
20 Jan 2021