Characterization of Gradient Dominance and Regularity Conditions for Neural Networks
Yi Zhou, Yingbin Liang
18 October 2017 · arXiv:1710.06910

Papers citing "Characterization of Gradient Dominance and Regularity Conditions for Neural Networks"

24 citing papers shown:
Porcupine Neural Networks: (Almost) All Local Optima are Global
Soheil Feizi, Hamid Javadi, Jesse M. Zhang, David Tse
05 Oct 2017

How regularization affects the critical points in linear networks
Amirhossein Taghvaei, Jin-Won Kim, P. Mehta
27 Sep 2017

Theoretical insights into the optimization landscape of over-parameterized shallow neural networks
Mahdi Soltanolkotabi, Adel Javanmard, Jason D. Lee
16 Jul 2017

Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie
08 Jul 2017

Recovery Guarantees for One-hidden-layer Neural Networks
Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon
10 Jun 2017 · MLT

The loss surface of deep and wide neural networks
Quynh N. Nguyen, Matthias Hein
26 Apr 2017 · ODL

Depth Creates No Bad Local Minima
Haihao Lu, Kenji Kawaguchi
27 Feb 2017 · ODL, FAtt

Exponentially vanishing sub-optimal local minima in multilayer neural networks
Daniel Soudry, Elad Hoffer
19 Feb 2017

Identity Matters in Deep Learning
Moritz Hardt, Tengyu Ma
14 Nov 2016 · OOD

Topology and Geometry of Half-Rectified Network Optimization
C. Freeman, Joan Bruna
04 Nov 2016

Demystifying ResNet
Sihan Li, Jiantao Jiao, Yanjun Han, Tsachy Weissman
03 Nov 2016

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark Schmidt
16 Aug 2016

Reshaped Wirtinger Flow and Incremental Algorithm for Solving Quadratic System of Equations
Huishuai Zhang, Yi Zhou, Yingbin Liang, Yuejie Chi
25 May 2016

Deep Learning without Poor Local Minima
Kenji Kawaguchi
23 May 2016 · ODL

Convergence Analysis for Rectangular Matrix Completion Using Burer-Monteiro Factorization and Gradient Descent
Qinqing Zheng, John D. Lafferty
23 May 2016

Stochastic Variance Reduction for Nonconvex Optimization
Sashank J. Reddi, Ahmed S. Hefny, S. Sra, Barnabás Póczós, Alex Smola
19 Mar 2016

Deep Residual Learning for Image Recognition
Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
10 Dec 2015 · MedIm

Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees
Yudong Chen, Martin J. Wainwright
10 Sep 2015

A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements
Qinqing Zheng, John D. Lafferty
19 Jun 2015

Solving Random Quadratic Systems of Equations Is Nearly as Easy as Solving Linear Systems
Yuxin Chen, Emmanuel J. Candès
19 May 2015

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
30 Nov 2014 · ODL

Phase Retrieval via Wirtinger Flow: Theory and Algorithms
Emmanuel J. Candès, Xiaodong Li, Mahdi Soltanolkotabi
03 Jul 2014

Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
Yann N. Dauphin, Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio
10 Jun 2014 · ODL

Complex-Valued Autoencoders
Pierre Baldi, Zhiqin Lu
20 Aug 2011