ResearchTrend.AI
Learning One-hidden-layer ReLU Networks via Gradient Descent
Xiao Zhang, Yaodong Yu, Lingxiao Wang, Quanquan Gu · MLT · 20 June 2018
arXiv:1806.07808

Papers citing "Learning One-hidden-layer ReLU Networks via Gradient Descent"

36 of 86 citing papers shown:
Directional Pruning of Deep Neural Networks
Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng · ODL · 16 Jun 2020

Agnostic Learning of a Single Neuron with Gradient Descent
Spencer Frei, Yuan Cao, Quanquan Gu · MLT · 29 May 2020

Feature Purification: How Adversarial Training Performs Robust Deep Learning
Zeyuan Allen-Zhu, Yuanzhi Li · MLT, AAML · 20 May 2020

Piecewise linear activations substantially shape the loss surfaces of neural networks
Fengxiang He, Bohan Wang, Dacheng Tao · ODL · 27 Mar 2020

Tune smarter not harder: A principled approach to tuning learning rates for shallow nets
Thulasi Tholeti, Sheetal Kalyani · 22 Mar 2020

On the Global Convergence of Training Deep Linear ResNets
Difan Zou, Philip M. Long, Quanquan Gu · 02 Mar 2020
A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks
Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang · MLT · 10 Feb 2020

Sharp Rate of Convergence for Deep Neural Network Classifiers under the Teacher-Student Setting
Tianyang Hu, Zuofeng Shang, Guang Cheng · 19 Jan 2020

Optimization for deep learning: theory and algorithms
Ruoyu Sun · ODL · 19 Dec 2019

Tight Sample Complexity of Learning One-hidden-layer Convolutional Neural Networks
Yuan Cao, Quanquan Gu · MLT · 12 Nov 2019

Time/Accuracy Tradeoffs for Learning a ReLU with respect to Gaussian Marginals
Surbhi Goel, Sushrut Karmalkar, Adam R. Klivans · 04 Nov 2019

Growing axons: greedy learning of neural networks with application to function approximation
Daria Fokina, Ivan V. Oseledets · 28 Oct 2019

Nearly Minimal Over-Parametrization of Shallow Neural Networks
Armin Eftekhari, Chaehwan Song, V. Cevher · 09 Oct 2019
Theoretical Issues in Deep Networks: Approximation, Optimization and Generalization
T. Poggio, Andrzej Banburski, Q. Liao · ODL · 25 Aug 2019

An Improved Analysis of Training Over-parameterized Deep Neural Networks
Difan Zou, Quanquan Gu · 11 Jun 2019

Fast Convergence of Natural Gradient Descent for Overparameterized Neural Networks
Guodong Zhang, James Martens, Roger C. Grosse · ODL · 27 May 2019

Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki · MLT · 23 May 2019

Theory III: Dynamics and Generalization in Deep Networks
Andrzej Banburski, Q. Liao, Brando Miranda, Lorenzo Rosasco, Fernanda De La Torre, Jack Hidary, T. Poggio · AI4CE · 12 Mar 2019

Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks
Yuan Cao, Quanquan Gu · ODL, MLT, AI4CE · 04 Feb 2019
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks
Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang · MLT · 24 Jan 2019

Width Provably Matters in Optimization for Deep Linear Neural Networks
S. Du, Wei Hu · 24 Jan 2019

Fitting ReLUs via SGD and Quantized SGD
Seyed Mohammadreza Mousavi Kalan, Mahdi Soltanolkotabi, A. Avestimehr · 19 Jan 2019

Convex Relaxations of Convolutional Neural Nets
Burak Bartan, Mert Pilanci · 31 Dec 2018

Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks
Difan Zou, Yuan Cao, Dongruo Zhou, Quanquan Gu · ODL · 21 Nov 2018

Gradient Descent Finds Global Minima of Deep Neural Networks
S. Du, J. Lee, Haochuan Li, Liwei Wang, M. Tomizuka · ODL · 09 Nov 2018
Subgradient Descent Learns Orthogonal Dictionaries
Yu Bai, Qijia Jiang, Ju Sun · 25 Oct 2018

Small ReLU networks are powerful memorizers: a tight analysis of memorization capacity
Chulhee Yun, S. Sra, Ali Jadbabaie · 17 Oct 2018

Learning Two-layer Neural Networks with Symmetric Inputs
Rong Ge, Rohith Kuditipudi, Zhize Li, Xiang Wang · OOD, MLT · 16 Oct 2018

Learning One-hidden-layer Neural Networks under General Input Distributions
Weihao Gao, Ashok Vardhan Makkuva, Sewoong Oh, Pramod Viswanath · MLT · 09 Oct 2018

Efficiently testing local optimality and escaping saddles for ReLU networks
Chulhee Yun, S. Sra, Ali Jadbabaie · 28 Sep 2018

Learning ReLU Networks on Linearly Separable Data: Algorithm, Optimality, and Generalization
G. Wang, G. Giannakis, Jie Chen · MLT · 14 Aug 2018
Guaranteed Recovery of One-Hidden-Layer Neural Networks via Cross Entropy
H. Fu, Yuejie Chi, Yingbin Liang · FedML · 18 Feb 2018

Small nonlinearities in activation functions create bad local minima in neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie · ODL · 10 Feb 2018

Global optimality conditions for deep neural networks
Chulhee Yun, S. Sra, Ali Jadbabaie · 08 Jul 2017

Benefits of depth in neural networks
Matus Telgarsky · 14 Feb 2016

The Loss Surfaces of Multilayer Networks
A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · ODL · 30 Nov 2014