Parallel Deep Neural Networks Have Zero Duality Gap

13 October 2021
Yifei Wang, Tolga Ergen, Mert Pilanci

Papers citing "Parallel Deep Neural Networks Have Zero Duality Gap"

11 papers

Convex Relaxations of ReLU Neural Networks Approximate Global Optima in Polynomial Time
International Conference on Machine Learning (ICML), 2024
Sungyoon Kim, Mert Pilanci
06 Feb 2024

Analyzing Neural Network-Based Generative Diffusion Models through Convex Optimization
Fangzhao Zhang, Mert Pilanci
03 Feb 2024

Polynomial-Time Solutions for ReLU Network Training: A Complexity Classification via Max-Cut and Zonotopes
Yifei Wang, Mert Pilanci
18 Nov 2023

Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs
Neural Information Processing Systems (NeurIPS), 2023
Rajat Vadiraj Dwaraknath, Tolga Ergen, Mert Pilanci
26 Sep 2023

ReLU Neural Networks with Linear Layers are Biased Towards Single- and Multi-Index Models
SIAM Journal on Mathematics of Data Science (SIMODS), 2023
Suzanna Parkinson, Greg Ongie, Rebecca Willett
24 May 2023

Globally Optimal Training of Neural Networks with Threshold Activation Functions
International Conference on Learning Representations (ICLR), 2023
Tolga Ergen, Halil Ibrahim Gulluk, Jonathan Lacotte, Mert Pilanci
06 Mar 2023

Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
International Conference on Machine Learning (ICML), 2022
Aaron Mishkin, Arda Sahiner, Mert Pilanci
02 Feb 2022

Path Regularization: A Convexity and Sparsity Inducing Regularization for Parallel ReLU Networks
Tolga Ergen, Mert Pilanci
18 Oct 2021

Hidden Convexity of Wasserstein GANs: Interpretable Generative Models with Closed-Form Solutions
Arda Sahiner, Tolga Ergen, Batu Mehmet Ozturkler, Burak Bartan, John M. Pauly, Morteza Mardani, Mert Pilanci
12 Jul 2021

Demystifying Batch Normalization in ReLU Networks: Equivalent Convex Optimization Models and Implicit Regularization
International Conference on Learning Representations (ICLR), 2021
Tolga Ergen, Arda Sahiner, Batu Mehmet Ozturkler, John M. Pauly, Morteza Mardani, Mert Pilanci
02 Mar 2021

Xception: Deep Learning with Depthwise Separable Convolutions
Computer Vision and Pattern Recognition (CVPR), 2016
François Chollet
07 Oct 2016