ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
20 June 2018

Papers citing "Neural Tangent Kernel: Convergence and Generalization in Neural Networks"

50 / 2,148 papers shown
Avoiding Kernel Fixed Points: Computing with ELU and GELU Infinite Networks
Russell Tsuchida, Tim Pearce, Christopher van der Heide, Fred Roosta, M. Gallagher (20 Feb 2020)

Implicit Regularization of Random Feature Models
Arthur Jacot, Berfin Simsek, Francesco Spadaro, Clément Hongler, Franck Gabriel (19 Feb 2020)

Deep regularization and direct training of the inner layers of Neural Networks with Kernel Flows
G. Yoo, H. Owhadi (19 Feb 2020)

Global Convergence of Deep Networks with One Wide Layer Followed by Pyramidal Topology
Quynh N. Nguyen, Marco Mondelli (18 Feb 2020) [ODL, AI4CE]

Learning Parities with Neural Networks
Amit Daniely, Eran Malach (18 Feb 2020)

Convergence of End-to-End Training in Deep Unsupervised Contrastive Learning
Zixin Wen (17 Feb 2020) [SSL]

πVAE: a stochastic process prior for Bayesian deep learning with MCMC
Swapnil Mishra, Seth Flaxman, Tresnia Berah, Harrison Zhu, Mikko S. Pakkanen, Samir Bhatt (17 Feb 2020) [BDL]

Over-parameterized Adversarial Training: An Analysis Overcoming the Curse of Dimensionality
Yi Zhang, Orestis Plevrakis, S. Du, Xingguo Li, Zhao-quan Song, Sanjeev Arora (16 Feb 2020)

Why Do Deep Residual Networks Generalize Better than Deep Feedforward Networks? -- A Neural Tangent Kernel Perspective
Kaixuan Huang, Yuqing Wang, Molei Tao, T. Zhao (14 Feb 2020) [MLT]

Self-Distillation Amplifies Regularization in Hilbert Space
H. Mobahi, Mehrdad Farajtabar, Peter L. Bartlett (13 Feb 2020)

Regret Bounds for Noise-Free Kernel-Based Bandits
Sattar Vakili (12 Feb 2020)

Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent
David Holzmüller, Ingo Steinwart (12 Feb 2020) [MLT]

Average-case Acceleration Through Spectral Density Estimation
Fabian Pedregosa, Damien Scieur (12 Feb 2020)

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss
Lénaïc Chizat, Francis R. Bach (11 Feb 2020) [MLT]

A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks
Zixiang Chen, Yuan Cao, Quanquan Gu, Tong Zhang (10 Feb 2020) [MLT]

Taylorized Training: Towards Better Approximation of Neural Network Training at Finite Width
Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, R. Socher (10 Feb 2020)

Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks
Minshuo Chen, Wenjing Liao, H. Zha, Tuo Zhao (10 Feb 2020)

Characterizing Structural Regularities of Labeled Data in Overparameterized Models
Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer (08 Feb 2020) [TDI]

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter (07 Feb 2020) [OOD, AAML]

Machine Unlearning: Linear Filtration for Logit-based Classifiers
Thomas Baumhauer, Pascal Schöttle, Matthias Zeppelzauer (07 Feb 2020) [MU]

Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks
Blake Bordelon, Abdulkadir Canatar, Cengiz Pehlevan (07 Feb 2020)

Quasi-Equivalence of Width and Depth of Neural Networks
Fenglei Fan, Rongjie Lai, Ge Wang (06 Feb 2020)

Almost Sure Convergence of Dropout Algorithms for Neural Networks
Albert Senen-Cerda, J. Sanders (06 Feb 2020)

Minimax Value Interval for Off-Policy Evaluation and Policy Optimization
Nan Jiang, Jiawei Huang (06 Feb 2020) [OffRL]

A Deep Conditioning Treatment of Neural Networks
Naman Agarwal, Pranjal Awasthi, Satyen Kale (04 Feb 2020) [AI4CE]

A Corrective View of Neural Networks: Representation, Memorization and Learning
Guy Bresler, Dheeraj M. Nagaraj (01 Feb 2020) [MLT]

Gating creates slow modes and controls phase-space complexity in GRUs and LSTMs
T. Can, K. Krishnamurthy, D. Schwab (31 Jan 2020) [AI4CE]

A Rigorous Framework for the Mean Field Limit of Multilayer Neural Networks
Phan-Minh Nguyen, H. Pham (30 Jan 2020) [AI4CE]

On Random Kernels of Residual Architectures
Etai Littwin, Tomer Galanti, Lior Wolf (28 Jan 2020)

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei (23 Jan 2020)

On the infinite width limit of neural networks with a standard parameterization
Jascha Narain Sohl-Dickstein, Roman Novak, S. Schoenholz, Jaehoon Lee (21 Jan 2020)

Any Target Function Exists in a Neighborhood of Any Sufficiently Wide Random Network: A Geometrical Perspective
S. Amari (20 Jan 2020)

Deep Network Approximation for Smooth Functions
Jianfeng Lu, Zuowei Shen, Haizhao Yang, Shijun Zhang (09 Jan 2020)

On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang (08 Jan 2020) [AAML, AI4CE]

Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
Shiyu Liang, Ruoyu Sun, R. Srikant (31 Dec 2019)

Disentangling Trainability and Generalization in Deep Neural Networks
Lechao Xiao, Jeffrey Pennington, S. Schoenholz (30 Dec 2019)

Deep Graph Similarity Learning: A Survey
Guixiang Ma, Nesreen Ahmed, Theodore L. Willke, Philip S. Yu (25 Dec 2019) [GNN]

Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks
A. Shevchenko, Marco Mondelli (20 Dec 2019)

Optimization for deep learning: theory and algorithms
Ruoyu Sun (19 Dec 2019) [ODL]

Analytic expressions for the output evolution of a deep neural network
Anastasia Borovykh (18 Dec 2019)

Frivolous Units: Wider Networks Are Not Really That Wide
Stephen Casper, Xavier Boix, Vanessa D’Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman (10 Dec 2019)

A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation
Pan Xu, Quanquan Gu (10 Dec 2019)

A priori generalization error for two-layer ReLU neural network through minimum norm solution
Zhi-Qin John Xu, Jiwei Zhang, Yaoyu Zhang, Chengchao Zhao (06 Dec 2019) [MLT]

Observational Overfitting in Reinforcement Learning
Xingyou Song, Yiding Jiang, Stephen Tu, Yilun Du, Behnam Neyshabur (06 Dec 2019) [OffRL]

Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Narain Sohl-Dickstein, S. Schoenholz (05 Dec 2019)

Towards Understanding the Spectral Bias of Deep Learning
Yuan Cao, Zhiying Fang, Yue Wu, Ding-Xuan Zhou, Quanquan Gu (03 Dec 2019)

Variable Selection with Rigorous Uncertainty Quantification using Deep Bayesian Neural Networks: Posterior Concentration and Bernstein-von Mises Phenomenon
Jeremiah Zhe Liu (03 Dec 2019) [BDL]

A Random Matrix Perspective on Mixtures of Nonlinearities for Deep Learning
Ben Adlam, J. Levinson, Jeffrey Pennington (02 Dec 2019)

On the optimality of kernels for high-dimensional clustering
L. C. Vankadara, D. Ghoshdastidar (01 Dec 2019)

On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks
Umut Simsekli, Mert Gurbuzbalaban, T. H. Nguyen, G. Richard, Levent Sagun (29 Nov 2019)