ResearchTrend.AI

Tight Hardness Results for Training Depth-2 ReLU Networks

Innovations in Theoretical Computer Science (ITCS), 2021
27 November 2020
Surbhi Goel, Adam R. Klivans, Pasin Manurangsi, Daniel Reichman
arXiv:2011.13550

Papers citing "Tight Hardness Results for Training Depth-2 ReLU Networks"

29 papers
The Computational Complexity of Counting Linear Regions in ReLU Neural Networks
Moritz Stargalla, Christoph Hertrich, Daniel Reichman
22 May 2025

On the Expressiveness of Rational ReLU Neural Networks With Bounded Depth
International Conference on Learning Representations (ICLR), 2025
Gennadiy Averkov, Christopher Hojny, Maximilian Merkert
10 Feb 2025

Generalizability of Memorization Neural Networks
Lijia Yu, Xiao-Shan Gao, Lijun Zhang, Yibo Miao
01 Nov 2024

Absence of Closed-Form Descriptions for Gradient Flow in Two-Layer Narrow Networks
Yeachan Park
15 Aug 2024

Linear Bellman Completeness Suffices for Efficient Online Reinforcement Learning with Few Actions
Noah Golowich, Ankur Moitra
17 Jun 2024

Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time
International Conference on Machine Learning (ICML), 2024
Sungyoon Kim, Mert Pilanci
06 Feb 2024

Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes
Yifei Wang, Mert Pilanci
18 Nov 2023

Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning
Yingcong Li, Kartik K. Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak
30 May 2023

Complexity of Neural Network Training and ETR: Extensions with Effectively Continuous Functions
AAAI Conference on Artificial Intelligence (AAAI), 2023
Teemu Hankala, Miika Hannula, J. Kontinen, Jonni Virtema
19 May 2023

When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay
29 Apr 2023

Training Neural Networks is NP-Hard in Fixed Dimension
Neural Information Processing Systems (NeurIPS), 2023
Vincent Froese, Christoph Hertrich
29 Mar 2023

Finite-Sample Analysis of Learning High-Dimensional Single ReLU Neuron
International Conference on Machine Learning (ICML), 2023
Jingfeng Wu, Difan Zou, Zixiang Chen, Vladimir Braverman, Quanquan Gu, Sham Kakade
03 Mar 2023

Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
International Conference on Learning Representations (ICLR), 2023
Christian Haase, Christoph Hertrich, Georg Loho
24 Feb 2023

Computational Complexity of Learning Neural Networks: Smoothness and Degeneracy
Neural Information Processing Systems (NeurIPS), 2023
Amit Daniely, Nathan Srebro, Gal Vardi
15 Feb 2023

A Combinatorial Perspective on the Optimization of Shallow ReLU Networks
Neural Information Processing Systems (NeurIPS), 2022
Michael Matena, Colin Raffel
01 Oct 2022

Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Neural Information Processing Systems (NeurIPS), 2022
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
04 Apr 2022

Neural networks with linear threshold activations: structure and algorithms
Conference on Integer Programming and Combinatorial Optimization (IPCO), 2021
Sammy Khalife, Hongyu Cheng, A. Basu
15 Nov 2021

Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Tolga Ergen, Mert Pilanci
18 Oct 2021

Robust Generalization of Quadratic Neural Networks via Function Identification
Kan Xu, Hamsa Bastani, Osbert Bastani
22 Sep 2021

Early-stopped neural networks are consistent
Neural Information Processing Systems (NeurIPS), 2021
Ziwei Ji, Justin D. Li, Matus Telgarsky
10 Jun 2021

Learning a Single Neuron with Bias Using Gradient Descent
Neural Information Processing Systems (NeurIPS), 2021
Gal Vardi, Gilad Yehudai, Ohad Shamir
02 Jun 2021

Towards Lower Bounds on the Depth of ReLU Neural Networks
Neural Information Processing Systems (NeurIPS), 2021
Christoph Hertrich, A. Basu, M. D. Summa, M. Skutella
31 May 2021

The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality
Journal of Artificial Intelligence Research (JAIR), 2021
Vincent Froese, Christoph Hertrich, R. Niedermeier
18 May 2021

Training Neural Networks is $\exists\mathbb{R}$-complete
Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow
19 Feb 2021

ReLU Neural Networks of Polynomial Size for Exact Maximum Flow Computation
Conference on Integer Programming and Combinatorial Optimization (IPCO), 2021
Christoph Hertrich, Leon Sering
12 Feb 2021

From Local Pseudorandom Generators to Hardness of Learning
Conference on Learning Theory (COLT), 2021
Amit Daniely, Gal Vardi
20 Jan 2021

Algorithms and SQ Lower Bounds for PAC Learning One-Hidden-Layer ReLU Networks
Ilias Diakonikolas, D. Kane, Vasilis Kontonis, Nikos Zarifis
22 Jun 2020

Provably Good Solutions to the Knapsack Problem via Neural Networks of Bounded Size
AAAI Conference on Artificial Intelligence (AAAI), 2020
Christoph Hertrich, M. Skutella
28 May 2020

Principled Deep Neural Network Training through Linear Programming
D. Bienstock, Gonzalo Muñoz, Sebastian Pokutta
07 Oct 2018