Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
arXiv:1806.07572 (20 June 2018)
Papers citing "Neural Tangent Kernel: Convergence and Generalization in Neural Networks" (48 of 2,148 papers shown)
On Exact Computation with an Infinitely Wide Neural Net. Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang. 26 Apr 2019.
Implicit regularization for deep neural networks driven by an Ornstein-Uhlenbeck like process. Guy Blanc, Neha Gupta, Gregory Valiant, Paul Valiant. 19 Apr 2019.
The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent. Karthik A. Sankararaman, Soham De, Zheng Xu, Yifan Jiang, Tom Goldstein. 15 Apr 2019. [ODL]
A Selective Overview of Deep Learning. Jianqing Fan, Cong Ma, Yiqiao Zhong. 10 Apr 2019. [BDL, VLM]
Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections. E. Weinan, Chao Ma, Qingcan Wang, Lei Wu. 10 Apr 2019. [MLT]
A Comparative Analysis of the Optimization and Generalization Property of Two-layer Neural Network and Random Feature Models Under Gradient Descent Dynamics. E. Weinan, Chao Ma, Lei Wu. 08 Apr 2019. [MLT]
Convergence rates for the stochastic gradient descent method for non-convex objective functions. Benjamin J. Fehrman, Benjamin Gess, Arnulf Jentzen. 02 Apr 2019.
On the Power and Limitations of Random Features for Understanding Neural Networks. Gilad Yehudai, Ohad Shamir. 01 Apr 2019. [MLT]
Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks. Mingchen Li, Mahdi Soltanolkotabi, Samet Oymak. 27 Mar 2019. [NoLa]
General Probabilistic Surface Optimization and Log Density Estimation. Dmitry Kopitkov, Vadim Indelman. 25 Mar 2019.
Towards Characterizing Divergence in Deep Q-Learning. Joshua Achiam, Ethan Knight, Pieter Abbeel. 21 Mar 2019.
Surprises in High-Dimensional Ridgeless Least Squares Interpolation. Trevor Hastie, Andrea Montanari, Saharon Rosset, R. Tibshirani. 19 Mar 2019.
Stabilize Deep ResNet with A Sharp Scaling Factor τ. Huishuai Zhang, Da Yu, Mingyang Yi, Wei Chen, Tie-Yan Liu. 17 Mar 2019.
Mean Field Analysis of Deep Neural Networks. Justin A. Sirignano, K. Spiliopoulos. 11 Mar 2019.
Function Space Particle Optimization for Bayesian Neural Networks. Ziyu Wang, Tongzheng Ren, Jun Zhu, Bo Zhang. 26 Feb 2019. [BDL]
Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent. Jaehoon Lee, Lechao Xiao, S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Narain Sohl-Dickstein, Jeffrey Pennington. 18 Feb 2019.
Mean-field theory of two-layers neural networks: dimension-free bounds and kernel limit. Song Mei, Theodor Misiakiewicz, Andrea Montanari. 16 Feb 2019. [MLT]
Scaling Limits of Wide Neural Networks with Weight Sharing: Gaussian Process Behavior, Gradient Independence, and Neural Tangent Kernel Derivation. Greg Yang. 13 Feb 2019.
Uniform convergence may be unable to explain generalization in deep learning. Vaishnavh Nagarajan, J. Zico Kolter. 13 Feb 2019. [MoMe, AI4CE]
Mean Field Limit of the Learning Dynamics of Multilayer Neural Networks. Phan-Minh Nguyen. 07 Feb 2019. [AI4CE]
Are All Layers Created Equal? Chiyuan Zhang, Samy Bengio, Y. Singer. 06 Feb 2019.
Generalization Error Bounds of Gradient Descent for Learning Over-parameterized Deep ReLU Networks. Yuan Cao, Quanquan Gu. 04 Feb 2019. [ODL, MLT, AI4CE]
Stiffness: A New Perspective on Generalization in Neural Networks. Stanislav Fort, Pawel Krzysztof Nowak, Stanislaw Jastrzebski, S. Narayanan. 28 Jan 2019.
Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs. D. Gilboa, B. Chang, Minmin Chen, Greg Yang, S. Schoenholz, Ed H. Chi, Jeffrey Pennington. 25 Jan 2019.
Fine-Grained Analysis of Optimization and Generalization for Overparameterized Two-Layer Neural Networks. Sanjeev Arora, S. Du, Wei Hu, Zhiyuan Li, Ruosong Wang. 24 Jan 2019. [MLT]
Training Neural Networks as Learning Data-adaptive Kernels: Provable Representation and Approximation Benefits. Xialiang Dou, Tengyuan Liang. 21 Jan 2019. [MLT]
A Theoretical Analysis of Deep Q-Learning. Jianqing Fan, Zhuoran Yang, Yuchen Xie, Zhaoran Wang. 01 Jan 2019.
On the Benefit of Width for Neural Networks: Disappearance of Bad Basins. Dawei Li, Tian Ding, Ruoyu Sun. 28 Dec 2018.
On Lazy Training in Differentiable Programming. Lénaïc Chizat, Edouard Oyallon, Francis R. Bach. 19 Dec 2018.
Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers. Zeyuan Allen-Zhu, Yuanzhi Li, Yingyu Liang. 12 Nov 2018. [MLT]
A Convergence Theory for Deep Learning via Over-Parameterization. Zeyuan Allen-Zhu, Yuanzhi Li, Zhao-quan Song. 09 Nov 2018. [AI4CE, ODL]
Gradient Descent Finds Global Minima of Deep Neural Networks. S. Du, J. Lee, Haochuan Li, Liwei Wang, M. Tomizuka. 09 Nov 2018. [ODL]
On the Convergence Rate of Training Recurrent Neural Networks. Zeyuan Allen-Zhu, Yuanzhi Li, Zhao-quan Song. 29 Oct 2018.
A jamming transition from under- to over-parametrization affects loss landscape and generalization. S. Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, M. Wyart. 22 Oct 2018.
Exchangeability and Kernel Invariance in Trained MLPs. Russell Tsuchida, Fred Roosta, M. Gallagher. 19 Oct 2018.
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel. Colin Wei, J. Lee, Qiang Liu, Tengyu Ma. 12 Oct 2018.
Information Geometry of Orthogonal Initializations and Training. Piotr A. Sokól, Il-Su Park. 09 Oct 2018. [AI4CE]
Gradient Descent Provably Optimizes Over-parameterized Neural Networks. S. Du, Xiyu Zhai, Barnabás Póczós, Aarti Singh. 04 Oct 2018. [MLT, ODL]
Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning. Charles H. Martin, Michael W. Mahoney. 02 Oct 2018. [AI4CE]
Generalization Properties of hyper-RKHS and its Applications. Fanghui Liu, Lei Shi, Xiaolin Huang, Jie-jin Yang, Johan A. K. Suykens. 26 Sep 2018.
On Lipschitz Bounds of General Convolutional Neural Networks. Dongmian Zou, R. Balan, Maneesh Kumar Singh. 04 Aug 2018.
Spurious Local Minima of Deep ReLU Neural Networks in the Neural Tangent Kernel Regime. T. Nitta. 13 Jun 2018.
Spurious Valleys in Two-layer Neural Network Optimization Landscapes. Luca Venturi, Afonso S. Bandeira, Joan Bruna. 18 Feb 2018.
High-dimensional dynamics of generalization error in neural networks. Madhu S. Advani, Andrew M. Saxe. 10 Oct 2017. [AI4CE]
Compressive Statistical Learning with Random Feature Moments. Rémi Gribonval, Gilles Blanchard, Nicolas Keriven, Y. Traonmilin. 22 Jun 2017.
Quantifying the probable approximation error of probabilistic inference programs. Marco F. Cusumano-Towner, Vikash K. Mansinghka. 31 May 2016.
New insights and perspectives on the natural gradient method. James Martens. 03 Dec 2014. [ODL]
The Loss Surfaces of Multilayer Networks. A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun. 30 Nov 2014. [ODL]