Learning Two Layer Rectified Neural Networks in Polynomial Time
Ainesh Bakshi, Rajesh Jayaram, David P. Woodruff
arXiv:1811.01885, 5 November 2018
Papers citing "Learning Two Layer Rectified Neural Networks in Polynomial Time" (47 papers):
Learning Neural Networks with Distribution Shift: Efficiently Certifiable Guarantees
Gautam Chandrasekaran, Adam R. Klivans, Lin Lin Lee, Konstantinos Stavropoulos (22 Feb 2025)

On the Hardness of Learning One Hidden Layer Neural Networks
Shuchen Li, Ilias Zadik, Manolis Zampetakis (04 Oct 2024)

Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions
Noah Golowich, Ankur Moitra (17 Jun 2024)

Hardness of Learning Neural Networks under the Manifold Hypothesis
B. Kiani, Jason Wang, Melanie Weber (03 Jun 2024)

SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning
Shuai Zhang, Heshan Devaka Fernando, Miao Liu, K. Murugesan, Songtao Lu, Pin-Yu Chen, Tianyi Chen, Meng Wang (24 May 2024)

Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time
Sungyoon Kim, Mert Pilanci (06 Feb 2024)

Agnostically Learning Multi-index Models with Queries
Ilias Diakonikolas, Daniel M. Kane, Vasilis Kontonis, Christos Tzamos, Nikos Zarifis (27 Dec 2023)

Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes
Yifei Wang, Mert Pilanci (18 Nov 2023)

On the Convergence and Sample Complexity Analysis of Deep Q-Networks with ε-Greedy Exploration
Shuai Zhang, Hongkang Li, Meng Wang, Miao Liu, Pin-Yu Chen, Songtao Lu, Sijia Liu, K. Murugesan, Subhajit Chaudhury (24 Oct 2023)

Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Ilias Diakonikolas, D. Kane (24 Jul 2023)

A faster and simpler algorithm for learning shallow networks
Sitan Chen, Shyam Narayanan (24 Jul 2023)

Most Neural Networks Are Almost Learnable
Amit Daniely, Nathan Srebro, Gal Vardi (25 May 2023)

Toward $L_\infty$-recovery of Nonlinear Functions: A Polynomial Sample Complexity Bound for Gaussian Random Fields
Kefan Dong, Tengyu Ma (29 Apr 2023)

Learning Narrow One-Hidden-Layer ReLU Networks
Sitan Chen, Zehao Dou, Surbhi Goel, Adam R. Klivans, Raghu Meka (20 Apr 2023)

Training a Two Layer ReLU Network Analytically
Adrian Barbu (06 Apr 2023)

Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy
Amit Daniely, Nathan Srebro, Gal Vardi (15 Feb 2023)

Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff (26 Jun 2022)

Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber (04 Apr 2022)

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka (10 Feb 2022)

How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis
Shuai Zhang, Ming Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong (21 Jan 2022)

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka (08 Nov 2021)

An Empirical Study on Compressed Decentralized Stochastic Gradient Algorithms with Overparameterized Models
A. Rao, Hoi-To Wai (09 Oct 2021)

Efficient Algorithms for Learning Depth-2 Neural Networks with General ReLU Activations
Pranjal Awasthi, Alex K. Tang, Aravindan Vijayaraghavan (21 Jul 2021)

Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time
Nadiia Chepurko, K. Clarkson, Praneeth Kacham, David P. Woodruff (16 Jul 2021)

Neural Optimization Kernel: Towards Robust Deep Learning
Yueming Lyu, Ivor Tsang (11 Jun 2021)

The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality
Vincent Froese, Christoph Hertrich, R. Niedermeier (18 May 2021)

Training Neural Networks is $\exists\mathbb{R}$-complete
Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow (19 Feb 2021)

From Local Pseudorandom Generators to Hardness of Learning
Amit Daniely, Gal Vardi (20 Jan 2021)

Towards Understanding Ensemble, Knowledge Distillation and Self-Distillation in Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li (17 Dec 2020)

Small Covers for Near-Zero Sets of Polynomials and Learning Latent Variable Models
Ilias Diakonikolas, D. Kane (14 Dec 2020)

Tight Hardness Results for Training Depth-2 ReLU Networks
Surbhi Goel, Adam R. Klivans, Pasin Manurangsi, Daniel Reichman (27 Nov 2020)

Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra
Nadiia Chepurko, K. Clarkson, L. Horesh, Honghao Lin, David P. Woodruff (09 Nov 2020)

MixCon: Adjusting the Separability of Data Representations for Harder Data Recovery
Xiaoxiao Li, Yangsibo Huang, Binghui Peng, Zhao Song, Keqin Li (22 Oct 2020)

Learning Deep ReLU Networks Is Fixed-Parameter Tractable
Sitan Chen, Adam R. Klivans, Raghu Meka (28 Sep 2020)

Generalized Leverage Score Sampling for Neural Networks
Jason D. Lee, Ruoqi Shen, Zhao Song, Mengdi Wang, Zheng Yu (21 Sep 2020)

Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK
Yuanzhi Li, Tengyu Ma, Hongyang R. Zhang (09 Jul 2020)

Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
Ilias Diakonikolas, D. Kane, Vasilis Kontonis, Nikos Zarifis (22 Jun 2020)

Training (Overparametrized) Neural Networks in Near-Linear Time
Jan van den Brand, Binghui Peng, Zhao Song, Omri Weinstein (20 Jun 2020)

Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li (20 May 2020)

Learning Polynomials of Few Relevant Dimensions
Sitan Chen, Raghu Meka (28 Apr 2020)

A Deep Conditioning Treatment of Neural Networks
Naman Agarwal, Pranjal Awasthi, Satyen Kale (04 Feb 2020)

Convex Formulation of Overparameterized Deep Neural Networks
Cong Fang, Yihong Gu, Weizhong Zhang, Tong Zhang (18 Nov 2019)

Quadratic Suffices for Over-parametrization via Matrix Chernoff Bound
Zhao Song, Xin Yang (09 Jun 2019)

What Can ResNet Learn Efficiently, Going Beyond Kernels?
Zeyuan Allen-Zhu, Yuanzhi Li (24 May 2019)

Analysis of a Two-Layer Neural Network via Displacement Convexity
Adel Javanmard, Marco Mondelli, Andrea Montanari (05 Jan 2019)

Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang (12 Nov 2018)

Principled Deep Neural Network Training through Linear Programming
D. Bienstock, Gonzalo Muñoz, Sebastian Pokutta (07 Oct 2018)