ResearchTrend.AI
Neural Tangent Kernel: Convergence and Generalization in Neural Networks

20 June 2018
Arthur Jacot, Franck Gabriel, Clément Hongler

Papers citing "Neural Tangent Kernel: Convergence and Generalization in Neural Networks"

Showing 50 of 2,148 citing papers
1. Scaling Limit of Neural Networks with the Xavier Initialization and Convergence to a Global Minimum · Justin A. Sirignano, K. Spiliopoulos · 09 Jul 2019
2. Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape · Johanni Brea, Berfin Simsek, Bernd Illing, W. Gerstner · 05 Jul 2019
3. On Symmetry and Initialization for Neural Networks · Ido Nachum, Amir Yehudayoff · 01 Jul 2019 (MLT)
4. Benign Overfitting in Linear Regression · Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler · 26 Jun 2019 (MLT)
5. Neural Proximal/Trust Region Policy Optimization Attains Globally Optimal Policy · Boyi Liu, Qi Cai, Zhuoran Yang, Zhaoran Wang · 25 Jun 2019
6. Limitations of Lazy Training of Two-layers Neural Networks · Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari · 21 Jun 2019 (MLT)
7. The Functional Neural Process · Christos Louizos, Xiahan Shi, Klamer Schutte, Max Welling · 19 Jun 2019 (BDL)
8. Disentangling feature and lazy training in deep neural networks · Mario Geiger, S. Spigler, Arthur Jacot, M. Wyart · 19 Jun 2019
9. Convergence of Adversarial Training in Overparametrized Neural Networks · Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, J. Lee · 19 Jun 2019 (AAML)
10. Gradient Dynamics of Shallow Univariate ReLU Networks · Francis Williams, Matthew Trager, Claudio Silva, Daniele Panozzo, Denis Zorin, Joan Bruna · 18 Jun 2019
11. Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup · Sebastian Goldt, Madhu S. Advani, Andrew M. Saxe, Florent Krzakala, Lenka Zdeborová · 18 Jun 2019 (MLT)
12. Meta-learning Pseudo-differential Operators with Deep Neural Networks · Jordi Feliu-Fabà, Yuwei Fan, Lexing Ying · 16 Jun 2019
13. Gradient Descent Maximizes the Margin of Homogeneous Neural Networks · Kaifeng Lyu, Jian Li · 13 Jun 2019
14. Kernel and Rich Regimes in Overparametrized Models · Blake E. Woodworth, Suriya Gunasekar, Pedro H. P. Savarese, E. Moroshko, Itay Golan, J. Lee, Daniel Soudry, Nathan Srebro · 13 Jun 2019
15. Generalization Guarantees for Neural Networks via Harnessing the Low-rank Structure of the Jacobian · Samet Oymak, Zalan Fabian, Mingchen Li, Mahdi Soltanolkotabi · 12 Jun 2019 (MLT)
16. Learning Curves for Deep Neural Networks: A Gaussian Field Theory Perspective · Omry Cohen, Orit Malka, Z. Ringel · 12 Jun 2019 (AI4CE)
17. An Improved Analysis of Training Over-parameterized Deep Neural Networks · Difan Zou, Quanquan Gu · 11 Jun 2019
18. Quadratic Suffices for Over-parametrization via Matrix Chernoff Bound · Zhao-quan Song, Xin Yang · 09 Jun 2019
19. The Normalization Method for Alleviating Pathological Sharpness in Wide Neural Networks · Ryo Karakida, S. Akaho, S. Amari · 07 Jun 2019
20. Approximate Inference Turns Deep Networks into Gaussian Processes · Mohammad Emtiyaz Khan, Alexander Immer, Ehsan Abedi, M. Korzepa · 05 Jun 2019 (UQCV, BDL)
21. Global Optimality Guarantees For Policy Gradient Methods · Jalaj Bhandari, Daniel Russo · 05 Jun 2019
22. Deep ReLU Networks Have Surprisingly Few Activation Patterns · Boris Hanin, David Rolnick · 03 Jun 2019
23. A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off · Yaniv Blumenfeld, D. Gilboa, Daniel Soudry · 03 Jun 2019 (MQ)
24. A mean-field limit for certain deep neural networks · Dyego Araújo, R. Oliveira, Daniel Yukimura · 01 Jun 2019 (AI4CE)
25. Exact Convergence Rates of the Neural Tangent Kernel in the Large Depth Limit · Soufiane Hayou, Arnaud Doucet, Judith Rousseau · 31 May 2019
26. What Can Neural Networks Reason About? · Keyulu Xu, Jingling Li, Mozhi Zhang, S. Du, Ken-ichi Kawarabayashi, Stefanie Jegelka · 30 May 2019 (NAI, AI4CE)
27. Generalization Bounds of Stochastic Gradient Descent for Wide and Deep Neural Networks · Yuan Cao, Quanquan Gu · 30 May 2019 (MLT, AI4CE)
28. Graph Neural Tangent Kernel: Fusing Graph Neural Networks with Graph Kernels · S. Du, Kangcheng Hou, Barnabás Póczós, Ruslan Salakhutdinov, Ruosong Wang, Keyulu Xu · 30 May 2019
29. Norm-based generalisation bounds for multi-class convolutional neural networks · Antoine Ledent, Waleed Mustafa, Yunwen Lei, Marius Kloft · 29 May 2019
30. Geometric Insights into the Convergence of Nonlinear TD Learning · David Brandfonbrener, Joan Bruna · 29 May 2019
31. On the Inductive Bias of Neural Tangent Kernels · A. Bietti, Julien Mairal · 29 May 2019
32. Gram-Gauss-Newton Method: Learning Overparameterized Neural Networks for Regression Problems · Tianle Cai, Ruiqi Gao, Jikai Hou, Siyu Chen, Dong Wang, Di He, Zhihua Zhang, Liwei Wang · 28 May 2019 (ODL)
33. Simple and Effective Regularization Methods for Training on Noisily Labeled Data with Generalization Guarantee · Wei Hu, Zhiyuan Li, Dingli Yu · 27 May 2019 (NoLa)
34. Infinitely deep neural networks as diffusion processes · Stefano Peluchetti, Stefano Favaro · 27 May 2019 (ODL)
35. Scalable Training of Inference Networks for Gaussian-Process Models · Jiaxin Shi, Mohammad Emtiyaz Khan, Jun Zhu · 27 May 2019 (BDL)
36. Fast Convergence of Natural Gradient Descent for Overparameterized Neural Networks · Guodong Zhang, James Martens, Roger C. Grosse · 27 May 2019 (ODL)
37. Temporal-difference learning with nonlinear function approximation: lazy training and mean field regimes · Andrea Agazzi, Jianfeng Lu · 27 May 2019
38. Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm · S. Spigler, Mario Geiger, M. Wyart · 26 May 2019
39. On Learning Over-parameterized Neural Networks: A Functional Approximation Perspective · Lili Su, Pengkun Yang · 26 May 2019 (MLT)
40. What Can ResNet Learn Efficiently, Going Beyond Kernels? · Zeyuan Allen-Zhu, Yuanzhi Li · 24 May 2019
41. Explicitizing an Implicit Bias of the Frequency Principle in Two-layer Neural Networks · Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma · 24 May 2019 (MLT, AI4CE)
42. Neural Temporal-Difference and Q-Learning Provably Converge to Global Optima · Qi Cai, Zhuoran Yang, Jason D. Lee, Zhaoran Wang · 24 May 2019
43. Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems · Atsushi Nitanda, Geoffrey Chinot, Taiji Suzuki · 23 May 2019 (MLT)
44. A type of generalization error induced by initialization in deep neural networks · Yaoyu Zhang, Zhi-Qin John Xu, Tao Luo, Zheng Ma · 19 May 2019
45. An Information Theoretic Interpretation to Deep Neural Networks · Shao-Lun Huang, Xiangxiang Xu, Lizhong Zheng, G. Wornell · 16 May 2019 (FAtt)
46. Do Kernel and Neural Embeddings Help in Training and Generalization? · Arman Rahbar, Emilio Jorge, Devdatt Dubhashi, Morteza Haghir Chehreghani · 13 May 2019 (MLT)
47. The Effect of Network Width on Stochastic Gradient Descent and Generalization: an Empirical Study · Daniel S. Park, Jascha Narain Sohl-Dickstein, Quoc V. Le, Samuel L. Smith · 09 May 2019
48. Data-dependent Sample Complexity of Deep Neural Networks via Lipschitz Augmentation · Colin Wei, Tengyu Ma · 09 May 2019
49. Similarity of Neural Network Representations Revisited · Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey E. Hinton · 01 May 2019
50. Linearized two-layers neural networks in high dimension · Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Andrea Montanari · 27 Apr 2019 (MLT)