Learning Deep ReLU Networks Is Fixed-Parameter Tractable

28 September 2020
Sitan Chen, Adam R. Klivans, Raghu Meka

Papers citing "Learning Deep ReLU Networks Is Fixed-Parameter Tractable"

9 / 9 papers shown

Tight Certified Robustness via Min-Max Representations of ReLU Neural Networks
Brendon G. Anderson, Samuel Pfrommer, Somayeh Sojoudi
OOD
07 Oct 2023

Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials
Ilias Diakonikolas, D. Kane
24 Jul 2023

A faster and simpler algorithm for learning shallow networks
Sitan Chen, Shyam Narayanan
24 Jul 2023

When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay
AI4CE
29 Apr 2023

Operator theory, kernels, and Feedforward Neural Networks
P. Jorgensen, Myung-Sin Song, James Tian
03 Jan 2023

Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
Alexander Munteanu, Simon Omlor, Zhao Song, David P. Woodruff
26 Jun 2022

Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
OffRL
04 Apr 2022

Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
Sitan Chen, Aravind Gollakota, Adam R. Klivans, Raghu Meka
10 Feb 2022

Efficiently Learning Any One Hidden Layer ReLU Network From Queries
Sitan Chen, Adam R. Klivans, Raghu Meka
MLAU, MLT
08 Nov 2021