ResearchTrend.AI

An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis
Yuandong Tian
arXiv:1703.00560 · 2 March 2017
Topics: MLT

Papers citing "An Analytical Formula of Population Gradient for two-layered ReLU network and its Applications in Convergence and Critical Point Analysis" (15 papers shown)
  • Nonparametric Learning of Two-Layer ReLU Residual Units
    Zhunxuan Wang, Linyun He, Chunchuan Lyu, Shay B. Cohen
    Topics: MLT, OffRL · Citations: 1 · 17 Aug 2020

  • Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach
    Ryo Karakida, S. Akaho, S. Amari
    Topics: FedML · Citations: 143 · 04 Jun 2018

  • Learning ReLUs via Gradient Descent
    Mahdi Soltanolkotabi
    Topics: MLT · Citations: 181 · 10 May 2017

  • Globally Optimal Gradient Descent for a ConvNet with Gaussian Inputs
    Alon Brutzkus, Amir Globerson
    Topics: MLT · Citations: 313 · 26 Feb 2017

  • The Landscape of Empirical Risk for Non-convex Losses
    Song Mei, Yu Bai, Andrea Montanari
    Citations: 312 · 22 Jul 2016

  • No bad local minima: Data independent training error guarantees for multilayer neural networks
    Daniel Soudry, Y. Carmon
    Citations: 235 · 26 May 2016

  • Deep Learning without Poor Local Minima
    Kenji Kawaguchi
    Topics: ODL · Citations: 922 · 23 May 2016

  • Deep Residual Learning for Image Recognition
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
    Topics: MedIm · Citations: 192,638 · 10 Dec 2015

  • Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
    Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
    Topics: VLM · Citations: 18,534 · 06 Feb 2015

  • The Loss Surfaces of Multilayer Networks
    A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun
    Topics: ODL · Citations: 1,191 · 30 Nov 2014

  • Going Deeper with Convolutions
    Christian Szegedy, Wei Liu, Yangqing Jia, P. Sermanet, Scott E. Reed, Dragomir Anguelov, D. Erhan, Vincent Vanhoucke, Andrew Rabinovich
    Citations: 43,511 · 17 Sep 2014

  • Sequence to Sequence Learning with Neural Networks
    Ilya Sutskever, Oriol Vinyals, Quoc V. Le
    Topics: AIMat · Citations: 20,491 · 10 Sep 2014

  • Very Deep Convolutional Networks for Large-Scale Image Recognition
    Karen Simonyan, Andrew Zisserman
    Topics: FAtt, MDE · Citations: 99,991 · 04 Sep 2014

  • Identifying and attacking the saddle point problem in high-dimensional non-convex optimization
    Yann N. Dauphin, Razvan Pascanu, Çağlar Gülçehre, Kyunghyun Cho, Surya Ganguli, Yoshua Bengio
    Topics: ODL · Citations: 1,380 · 10 Jun 2014

  • Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
    Andrew M. Saxe, James L. McClelland, Surya Ganguli
    Topics: ODL · Citations: 1,830 · 20 Dec 2013