Complexity of Training ReLU Neural Network
Discrete Optimization, 2018
Digvijay Boob, Santanu S. Dey, Guanghui Lan
arXiv:1809.10787, 27 September 2018

Papers citing "Complexity of Training ReLU Neural Network"

26 citing papers are listed below.

Algebraic Approach to Ridge-Regularized Mean Squared Error Minimization in Minimal ReLU Neural Network
Ryoya Fukasaku, Y. Kabata, Akifumi Okuno
25 Aug 2025

Generative Feature Training of Thin 2-Layer Networks
J. Hertrich, Sebastian Neumayer
11 Nov 2024

Limits of Transformer Language Models on Learning to Compose Algorithms
Jonathan Thomm, Aleksandar Terzić, Giacomo Camposampiero, Michael Hersche, Bernhard Schölkopf, Abbas Rahimi
08 Feb 2024

Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time
International Conference on Machine Learning (ICML), 2024
Sungyoon Kim, Mert Pilanci
06 Feb 2024

The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models
Tolga Ergen, Mert Pilanci
19 Dec 2023

Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes
Yifei Wang, Mert Pilanci
18 Nov 2023

AI-based soundscape analysis: Jointly identifying sound sources and predicting annoyance
Journal of the Acoustical Society of America (JASA), 2023
Yuanbo Hou, Qiaoqiao Ren, Huizhong Zhang, A. Mitchell, F. Aletta, Jian Kang, Dick Botteldooren
15 Nov 2023

Review of AlexNet for Medical Image Classification
Wenhao Tang, Junding Sun, Shuihua Wang, Yudong Zhang
15 Nov 2023

Complexity of Neural Network Training and ETR: Extensions with Effectively Continuous Functions
AAAI Conference on Artificial Intelligence (AAAI), 2023
Teemu Hankala, Miika Hannula, J. Kontinen, Jonni Virtema
19 May 2023

When Deep Learning Meets Polyhedral Theory: A Survey
Joey Huchette, Gonzalo Muñoz, Thiago Serra, Calvin Tsay
29 Apr 2023

Training Neural Networks is NP-Hard in Fixed Dimension
Neural Information Processing Systems (NeurIPS), 2023
Vincent Froese, Christoph Hertrich
29 Mar 2023

Towards Theoretical Analysis of Transformation Complexity of ReLU DNNs
International Conference on Machine Learning (ICML), 2022
Jie Ren, Mingjie Li, Meng Zhou, Shih-Han Chan, Quanshi Zhang
04 May 2022

Multi-Spatio-temporal Fusion Graph Recurrent Network for Traffic Forecasting
Engineering Applications of Artificial Intelligence (EAAI), 2022
Wei Zhao, Shiqi Zhang, B. Zhou, Bei Wang
03 May 2022

Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete
Neural Information Processing Systems (NeurIPS), 2022
Daniel Bertschinger, Christoph Hertrich, Paul Jungeblut, Tillmann Miltzow, Simon Weber
04 Apr 2022

Neural networks with linear threshold activations: structure and algorithms
Conference on Integer Programming and Combinatorial Optimization (IPCO), 2021
Sammy Khalife, Hongyu Cheng, A. Basu
15 Nov 2021

Theory of overparametrization in quantum neural networks
Nature Computational Science (Nat. Comput. Sci.), 2021
Martín Larocca, Nathan Ju, Diego García-Martín, Patrick J. Coles, M. Cerezo
23 Sep 2021

Towards Lower Bounds on the Depth of ReLU Neural Networks
Neural Information Processing Systems (NeurIPS), 2021
Christoph Hertrich, A. Basu, M. D. Summa, M. Skutella
31 May 2021

The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality
Journal of Artificial Intelligence Research (JAIR), 2021
Vincent Froese, Christoph Hertrich, R. Niedermeier
18 May 2021

Training Neural Networks is $\exists\mathbb{R}$-complete
Mikkel Abrahamsen, Linda Kleist, Tillmann Miltzow
19 Feb 2021

Tight Hardness Results for Training Depth-2 ReLU Networks
Innovations in Theoretical Computer Science (ITCS), 2021
Surbhi Goel, Adam R. Klivans, Pasin Manurangsi, Daniel Reichman
27 Nov 2020

Interpreting and Disentangling Feature Components of Various Complexity from DNNs
Jie Ren, Mingjie Li, Zexu Liu, Quanshi Zhang
29 Jun 2020

Implicit Convex Regularizers of CNN Architectures: Convex Optimization of Two- and Three-Layer Networks in Polynomial Time
Tolga Ergen, Mert Pilanci
26 Jun 2020

Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks
International Conference on Machine Learning (ICML), 2020
Mert Pilanci, Tolga Ergen
24 Feb 2020

Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent
Journal of Machine Learning Research (JMLR), 2020
David Holzmüller, Ingo Steinwart
12 Feb 2020

Proving the Lottery Ticket Hypothesis: Pruning is All You Need
International Conference on Machine Learning (ICML), 2020
Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir
03 Feb 2020

Principled Deep Neural Network Training through Linear Programming
D. Bienstock, Gonzalo Muñoz, Sebastian Pokutta
07 Oct 2018