Neural Tangent Kernel: Convergence and Generalization in Neural Networks
Arthur Jacot, Franck Gabriel, Clément Hongler
arXiv:1806.07572 · 20 June 2018
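
For context, the kernel that the papers below build on is the neural tangent kernel of the title paper, Θ(x, x') = ⟨∂f(x; θ)/∂θ, ∂f(x'; θ)/∂θ⟩, which Jacot, Gabriel, and Hongler show becomes deterministic at initialization and constant during training in the infinite-width limit. The snippet below is a minimal sketch of the empirical (finite-width) kernel for a toy two-layer ReLU network in JAX; it is not taken from the paper's code, and the network, widths, and helper names (init_params, empirical_ntk) are illustrative assumptions.

```python
# Minimal sketch of the empirical neural tangent kernel
# Theta(x, x') = <df(x; theta)/dtheta, df(x'; theta)/dtheta>
# for a toy two-layer ReLU network. Names and sizes are illustrative.
import jax
import jax.numpy as jnp

def init_params(key, d_in=3, d_hidden=64):
    k1, k2 = jax.random.split(key)
    # NTK-style parameterization: standard-normal weights,
    # with 1/sqrt(fan-in) scaling applied in the forward pass.
    return {
        "W1": jax.random.normal(k1, (d_hidden, d_in)),
        "W2": jax.random.normal(k2, (1, d_hidden)),
    }

def f(params, x):
    # Scalar output of the two-layer ReLU network.
    h = jax.nn.relu(params["W1"] @ x / jnp.sqrt(x.shape[0]))
    return (params["W2"] @ h / jnp.sqrt(h.shape[0]))[0]

def empirical_ntk(params, x1, x2):
    # Inner product of the parameter gradients at the two inputs.
    g1 = jax.grad(f)(params, x1)
    g2 = jax.grad(f)(params, x2)
    leaves1, _ = jax.tree_util.tree_flatten(g1)
    leaves2, _ = jax.tree_util.tree_flatten(g2)
    return sum(jnp.vdot(a, b) for a, b in zip(leaves1, leaves2))

key = jax.random.PRNGKey(0)
params = init_params(key)
print(empirical_ntk(params, jnp.ones(3), jnp.arange(3.0)))
```

At finite width this kernel fluctuates with the random initialization; the convergence and constancy results of the title paper apply as the hidden width tends to infinity.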

Papers citing "Neural Tangent Kernel: Convergence and Generalization in Neural Networks"

50 of 2,148 citing papers are shown below (ResearchTrend.AI community tags, where present, appear in brackets).
  • How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks? (Zixiang Chen, Yuan Cao, Difan Zou, Quanquan Gu; 27 Nov 2019)
  • Benefits of Jointly Training Autoencoders: An Improved Neural Tangent Kernel Analysis (Thanh Van Nguyen, Raymond K. W. Wong, C. Hegde; 27 Nov 2019)
  • Gating Revisited: Deep Multi-layer RNNs That Can Be Trained (Mehmet Özgür Türkoglu, Stefano D'Aronco, Jan Dirk Wegner, Konrad Schindler; 25 Nov 2019)
  • Neural Networks Learning and Memorization with (almost) no Over-Parameterization (Amit Daniely; 22 Nov 2019)
  • Information in Infinite Ensembles of Infinitely-Wide Neural Networks (Ravid Shwartz-Ziv, Alexander A. Alemi; 20 Nov 2019)
  • Implicit Regularization and Convergence for Weight Normalization (Xiaoxia Wu, Yan Sun, Tongzheng Ren, Shanshan Wu, Zhiyuan Li, Suriya Gunasekar, Rachel A. Ward, Qiang Liu; 18 Nov 2019)
  • Convex Formulation of Overparameterized Deep Neural Networks (Cong Fang, Yihong Gu, Weizhong Zhang, Tong Zhang; 18 Nov 2019)
  • Asymptotics of Reinforcement Learning with Neural Networks (Justin A. Sirignano, K. Spiliopoulos; 13 Nov 2019) [MLT]
  • Neural Contextual Bandits with UCB-based Exploration (Dongruo Zhou, Lihong Li, Quanquan Gu; 11 Nov 2019)
  • How Implicit Regularization of ReLU Neural Networks Characterizes the Learned Function -- Part I: the 1-D Case of Two Layers with Random First Layer (Jakob Heiss, Josef Teichmann, Hanna Wutte; 07 Nov 2019) [MLT]
  • Sub-Optimal Local Minima Exist for Neural Networks with Almost All Non-Linear Activations (Tian Ding, Dawei Li, Ruoyu Sun; 04 Nov 2019)
  • Mean-field inference methods for neural networks (Marylou Gabrié; 03 Nov 2019) [AI4CE]
  • Enhanced Convolutional Neural Tangent Kernels (Zhiyuan Li, Ruosong Wang, Dingli Yu, S. Du, Wei Hu, Ruslan Salakhutdinov, Sanjeev Arora; 03 Nov 2019)
  • Gaussian-Spherical Restricted Boltzmann Machines (A. Decelle, Cyril Furtlehner; 31 Oct 2019)
  • Tensor Programs I: Wide Feedforward or Recurrent Neural Networks of Any Architecture are Gaussian Processes (Greg Yang; 28 Oct 2019)
  • Learning Boolean Circuits with Neural Networks (Eran Malach, Shai Shalev-Shwartz; 25 Oct 2019)
  • Multi-scale Deep Neural Networks for Solving High Dimensional PDEs (Wei Cai, Zhi-Qin John Xu; 25 Oct 2019) [AI4CE]
  • Over Parameterized Two-level Neural Networks Can Learn Near Optimal Feature Representations (Cong Fang, Hanze Dong, Tong Zhang; 25 Oct 2019)
  • Neural Spectrum Alignment: Empirical Study (Dmitry Kopitkov, Vadim Indelman; 19 Oct 2019)
  • Why bigger is not always better: on finite and infinite neural networks (Laurence Aitchison; 17 Oct 2019)
  • The Renyi Gaussian Process: Towards Improved Generalization (Xubo Yue, Raed Al Kontar; 15 Oct 2019)
  • Neural tangent kernels, transportation mappings, and universal approximation (Ziwei Ji, Matus Telgarsky, Ruicheng Xian; 15 Oct 2019)
  • Pathological spectra of the Fisher information metric and its variants in deep neural networks (Ryo Karakida, S. Akaho, S. Amari; 14 Oct 2019)
  • Emergent properties of the local geometry of neural loss landscapes (Stanislav Fort, Surya Ganguli; 14 Oct 2019)
  • Large Deviation Analysis of Function Sensitivity in Random Deep Neural Networks (Bo Li, D. Saad; 13 Oct 2019)
  • On the expected behaviour of noise regularised deep neural networks as Gaussian processes (Arnu Pretorius, Herman Kamper, Steve Kroon; 12 Oct 2019)
  • Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks (Sanjeev Arora, S. Du, Zhiyuan Li, Ruslan Salakhutdinov, Ruosong Wang, Dingli Yu; 03 Oct 2019) [AAML]
  • Beyond Linearization: On Quadratic and Higher-Order Approximation of Wide Neural Networks (Yu Bai, J. Lee; 03 Oct 2019)
  • Distillation ≈ Early Stopping? Harvesting Dark Knowledge Utilizing Anisotropic Information Retrieval For Overparameterized Neural Network (Bin Dong, Jikai Hou, Yiping Lu, Zhihua Zhang; 02 Oct 2019)
  • Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory (Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein; 01 Oct 2019)
  • The asymptotic spectrum of the Hessian of DNN throughout training (Arthur Jacot, Franck Gabriel, Clément Hongler; 01 Oct 2019)
  • Non-Gaussian processes and neural networks at finite widths (Sho Yaida; 30 Sep 2019)
  • Student Specialization in Deep ReLU Networks With Finite Width and Input Dimension (Yuandong Tian; 30 Sep 2019) [MLT]
  • Overparameterized Neural Networks Implement Associative Memory (Adityanarayanan Radhakrishnan, M. Belkin, Caroline Uhler; 26 Sep 2019) [BDL]
  • Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks (Ziwei Ji, Matus Telgarsky; 26 Sep 2019)
  • Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently (Rong Ge, Runzhe Wang, Haoyu Zhao; 26 Sep 2019) [TDI]
  • Asymptotics of Wide Networks from Feynman Diagrams (Ethan Dyer, Guy Gur-Ari; 25 Sep 2019)
  • Dynamics of Deep Neural Networks and Neural Tangent Hierarchy (Jiaoyang Huang, H. Yau; 18 Sep 2019)
  • Finite Depth and Width Corrections to the Neural Tangent Kernel (Boris Hanin, Mihai Nica; 13 Sep 2019) [MDE]
  • Additive function approximation in the brain (K. Harris; 05 Sep 2019)
  • Neural Policy Gradient Methods: Global Optimality and Rates of Convergence (Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang; 29 Aug 2019)
  • Deep Learning Theory Review: An Optimal Control and Dynamical Systems Perspective (Guan-Horng Liu, Evangelos A. Theodorou; 28 Aug 2019) [AI4CE]
  • On the Multiple Descent of Minimum-Norm Interpolants and Restricted Lower Isometry of Kernels (Tengyuan Liang, Alexander Rakhlin, Xiyu Zhai; 27 Aug 2019)
  • Finite size corrections for neural network Gaussian processes (J. Antognini; 27 Aug 2019) [BDL]
  • Effect of Activation Functions on the Training of Overparametrized Neural Nets (A. Panigrahi, Abhishek Shetty, Navin Goyal; 16 Aug 2019)
  • The generalization error of random features regression: Precise asymptotics and double descent curve (Song Mei, Andrea Montanari; 14 Aug 2019)
  • A Fine-Grained Spectral Perspective on Neural Networks (Greg Yang, Hadi Salman; 24 Jul 2019)
  • Sparse Optimization on Measures with Over-parameterized Gradient Descent (Lénaïc Chizat; 24 Jul 2019)
  • Order and Chaos: NTK views on DNN Normalization, Checkerboard and Boundary Artifacts (Arthur Jacot, Franck Gabriel, François Ged, Clément Hongler; 11 Jul 2019)
  • Which Algorithmic Choices Matter at Which Batch Sizes? Insights From a Noisy Quadratic Model (Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George E. Dahl, Christopher J. Shallue, Roger C. Grosse; 09 Jul 2019)