arXiv: 2009.13512
Learning Deep ReLU Networks Is Fixed-Parameter Tractable
28 September 2020
Sitan Chen, Adam R. Klivans, Raghu Meka

Papers citing "Learning Deep ReLU Networks Is Fixed-Parameter Tractable" (9 papers shown)

Tight Certified Robustness via Min-Max Representations of ReLU Neural Networks
Brendon G. Anderson, Samuel Pfrommer, Somayeh Sojoudi
07 Oct 2023 · OOD

Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Ilias Diakonikolas, D. Kane
24 Jul 2023

A faster and simpler algorithm for learning shallow networks
Sitan Chen, Shyam Narayanan
24 Jul 2023

When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay
29 Apr 2023 · AI4CE

Operator theory, kernels, and Feedforward Neural Networks
P. Jorgensen, Myung-Sin Song, James Tian
03 Jan 2023

Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
Alexander Munteanu, Simon Omlor, Zhao-quan Song, David P. Woodruff
26 Jun 2022

Training Fully Connected Neural Networks is ∃ℝ-Complete
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
04 Apr 2022 · OffRL

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka
10 Feb 2022

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka
08 Nov 2021 · MLAU, MLT