Tight Hardness Results for Training Depth-2 ReLU Networks

Surbhi Goel, Adam R. Klivans, Pasin Manurangsi, Daniel Reichman
27 November 2020 · arXiv:2011.13550

Papers citing "Tight Hardness Results for Training Depth-2 ReLU Networks" (29 papers)
  • The Computational Complexity of Counting Linear Regions in ReLU Neural Networks. Moritz Stargalla, Christoph Hertrich, Daniel Reichman. 22 May 2025. [MLT]
  • On the Expressiveness of Rational ReLU Neural Networks With Bounded Depth. Gennadiy Averkov, Christopher Hojny, Maximilian Merkert. 10 Feb 2025.
  • Generalizability of Memorization Neural Networks. Lijia Yu, Xiao-Shan Gao, Lijun Zhang, Yibo Miao. 01 Nov 2024.
  • Absence of Closed-Form Descriptions for Gradient Flow in Two-Layer Narrow Networks. Yeachan Park. 15 Aug 2024. [AI4CE]
  • Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions. Noah Golowich, Ankur Moitra. 17 Jun 2024. [OffRL]
  • Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time. Sungyoon Kim, Mert Pilanci. 06 Feb 2024.
  • Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes. Yifei Wang, Mert Pilanci. 18 Nov 2023.
  • Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning. Yingcong Li, Kartik K. Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak. 30 May 2023. [LRM]
  • Complexity of Neural Network Training and ETR: Extensions with Effectively Continuous Functions. Teemu Hankala, Miika Hannula, J. Kontinen, Jonni Virtema. 19 May 2023.
  • When Deep Learning Meets Polyhedral Theory: A Survey. Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay. 29 Apr 2023. [AI4CE]
  • Training Neural Networks is NP-Hard in Fixed Dimension. Vincent Froese, Christoph Hertrich. 29 Mar 2023.
  • Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron. Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham Kakade. 03 Mar 2023.
  • Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes. Christian Haase, Christoph Hertrich, Georg Loho. 24 Feb 2023.
  • Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy. Amit Daniely, Nathan Srebro, Gal Vardi. 15 Feb 2023.
  • A Combinatorial Perspective on the Optimization of Shallow ReLU Networks. Michael Matena, Colin Raffel. 01 Oct 2022.
  • Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete. Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber. 04 Apr 2022. [OffRL]
  • Neural networks with linear threshold activations: structure and algorithms. Sammy Khalife, Hongyu Cheng, A. Basu. 15 Nov 2021.
  • Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks. Tolga Ergen, Mert Pilanci. 18 Oct 2021.
  • Robust Generalization of Quadratic Neural Networks via Function Identification. Kan Xu, Hamsa Bastani, Osbert Bastani. 22 Sep 2021. [OOD]
  • Early-stopped neural networks are consistent. Ziwei Ji, Justin D. Li, Matus Telgarsky. 10 Jun 2021.
  • Learning a Single Neuron with Bias Using Gradient Descent. Gal Vardi, Gilad Yehudai, Ohad Shamir. 02 Jun 2021. [MLT]
  • Towards Lower Bounds on the Depth of ReLU Neural Networks. Christoph Hertrich, A. Basu, M. D. Summa, M. Skutella. 31 May 2021.
  • The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality. Vincent Froese, Christoph Hertrich, R. Niedermeier. 18 May 2021.
  • Training Neural Networks is $\exists\mathbb R$-complete. Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow. 19 Feb 2021.
  • ReLU Neural Networks of Polynomial Size for Exact Maximum Flow Computation. Christoph Hertrich, Leon Sering. 12 Feb 2021.
  • From Local Pseudorandom Generators to Hardness of Learning. Amit Daniely, Gal Vardi. 20 Jan 2021.
  • Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks. Ilias Diakonikolas, D. Kane, Vasilis Kontonis, Nikos Zarifis. 22 Jun 2020.
  • Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size. Christoph Hertrich, M. Skutella. 28 May 2020.
  • Principled Deep Neural Network Training through Linear Programming. D. Bienstock, Gonzalo Muñoz, Sebastian Pokutta. 07 Oct 2018.