On the Complexity of Learning Neural Networks
arXiv:1707.04615 · 14 July 2017
Le Song, Santosh Vempala, John Wilmes, Bo Xie

Papers citing "On the Complexity of Learning Neural Networks" (12 papers)

Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy
Amit Daniely, Nathan Srebro, Gal Vardi (15 Feb 2023)

Learning ReLU networks to high uniform accuracy is intractable
Julius Berner, Philipp Grohs, F. Voigtlaender (26 May 2022)

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka (10 Feb 2022)

From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely, Gal Vardi (20 Jan 2021)

Learning Graph Neural Networks with Approximate Gradient Descent [GNN]
Qunwei Li, Shaofeng Zou, Leon Wenliang Zhong (07 Dec 2020)

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka (28 Sep 2020)

Superpolynomial Lower Bounds for Learning One-Layer Neural Networks using Gradient Descent [MLT, ODL]
Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, Adam R. Klivans (22 Jun 2020)

On Tighter Generalization Bound for Deep Neural Networks: CNNs, ResNets, and Beyond
Xingguo Li, Junwei Lu, Zhaoran Wang, Jarvis Haupt, T. Zhao (13 Jun 2018)

Adversarial examples from computational constraints [AAML]
Sébastien Bubeck, Eric Price, Ilya P. Razenshteyn (25 May 2018)

How Many Samples are Needed to Estimate a Convolutional or Recurrent Neural Network? [SSL]
S. Du, Yining Wang, Xiyu Zhai, Sivaraman Balakrishnan, Ruslan Salakhutdinov, Aarti Singh (21 May 2018)

Gradient Descent for One-Hidden-Layer Neural Networks: Polynomial Convergence and SQ Lower Bounds [MLT]
Santosh Vempala, John Wilmes (07 May 2018)

Benefits of depth in neural networks
Matus Telgarsky (14 Feb 2016)