Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS
Lin Chen, Sheng Xu
22 September 2020 (arXiv:2009.10683)
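
For context, a brief hedged gloss of the paper's headline claim (the unit-sphere domain and the fully connected ReLU setting are our reading of the paper, not something shown on this listing): restricted to the unit sphere, the reproducing kernel Hilbert space induced by the deep neural tangent kernel coincides with that of the Laplace kernel,

$$
k_{\mathrm{Lap}}(x, x') = e^{-c\,\lVert x - x' \rVert}, \qquad
\mathcal{H}_{\mathrm{NTK}}\big(\mathbb{S}^{d-1}\big) = \mathcal{H}_{\mathrm{Lap}}\big(\mathbb{S}^{d-1}\big), \qquad c > 0.
$$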

Papers citing "Deep Neural Tangent Kernel and Laplace Kernel Have the Same RKHS"

50 of 69 citing papers shown.

Feature maps for the Laplacian kernel and its generalizations
Sudhendu Ahir, Parthe Pandit
24 Feb 2025

A Gap Between the Gaussian RKHS and Neural Networks: An Infinite-Center Asymptotic Analysis
Akash Kumar, Rahul Parhi, Mikhail Belkin
22 Feb 2025

Which Spaces can be Embedded in $L_p$-type Reproducing Kernel Banach Space? A Characterization via Metric Entropy
Yiping Lu, Daozhe Lin, Qiang Du
14 Oct 2024

Variational Search Distributions
Daniel M. Steinberg, Rafael Oliveira, Cheng Soon Ong, Edwin V. Bonilla
10 Sep 2024

Approximation and Gradient Descent Training with Neural Networks
G. Welper
19 May 2024

Nonparametric Teaching of Implicit Neural Representations
Chen Zhang, Steven Tin Sui Luo, Jason Chun Lok Li, Yik-Chung Wu, Ngai Wong
17 May 2024

Multi-layer random features and the approximation power of neural networks
Rustem Takhanov
26 Apr 2024

Laplace-HDC: Understanding the geometry of binary hyperdimensional computing
Saeid Pourmand, Wyatt D. Whiting, Alireza Aghasi, Nicholas F. Marshall
16 Apr 2024

Generalization in Kernel Regression Under Realistic Assumptions
Daniel Barzilai, Ohad Shamir
26 Dec 2023

How many Neurons do we need? A refined Analysis for Shallow Networks trained with Gradient Descent
Mike Nguyen, Nicole Mücke
MLT
14 Sep 2023

Approximation Results for Gradient Descent trained Neural Networks
G. Welper
09 Sep 2023

Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space
Zhengdao Chen
03 Jul 2023

Benign Overfitting in Deep Neural Networks under Lazy Training
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, Francesco Locatello, V. Cevher
AI4CE
30 May 2023

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart
MLT
23 May 2023

Deep Learning with Kernels through RKHM and the Perron-Frobenius Operator
Yuka Hashimoto, Masahiro Ikeda, Hachem Kadri
23 May 2023

ReLU soothes the NTK condition number and accelerates optimization for wide neural networks
Chaoyue Liu, Like Hui
MLT
15 May 2023

Random Smoothing Regularization in Kernel Gradient Descent Learning
Liang Ding, Tianyang Hu, Jiahan Jiang, Donghao Li, Wenjia Wang, Yuan Yao
05 May 2023

On the Eigenvalue Decay Rates of a Class of Neural-Network Related Kernel Functions Defined on General Domains
Yicheng Li, Zixiong Yu, Y. Cotronis, Qian Lin
04 May 2023

Adaptation to Misspecified Kernel Regularity in Kernelised Bandits
Yusha Liu, Aarti Singh
26 Apr 2023

Contrastive Learning Is Spectral Clustering On Similarity Graph
Zhi-Hao Tan, Yifan Zhang, Jingqin Yang, Yang Yuan
SSL
27 Mar 2023

On Statistical Properties of Sharpness-Aware Minimization: Provable Guarantees
Kayhan Behdin, Rahul Mazumder
23 Feb 2023

A Kernel Perspective of Skip Connections in Convolutional Networks
Daniel Barzilai, Amnon Geifman, Meirav Galun, Ronen Basri
27 Nov 2022

An Empirical Analysis of the Advantages of Finite- v.s. Infinite-Width Bayesian Neural Networks
Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez
16 Nov 2022

Regularized Stein Variational Gradient Flow
Ye He, Krishnakumar Balasubramanian, Bharath K. Sriperumbudur, Jianfeng Lu
OT
15 Nov 2022

Characterizing the Spectrum of the NTK via a Power Series Expansion
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar
15 Nov 2022

Few-shot Backdoor Attacks via Neural Tangent Kernels
J. Hayase, Sewoong Oh
12 Oct 2022

Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$
R. Gentile, G. Welper
ODL
17 Sep 2022

Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher
15 Sep 2022

On the Trade-Off between Actionable Explanations and the Right to be Forgotten
Martin Pawelczyk, Tobias Leemann, Asia J. Biega, Gjergji Kasneci
FaML, MU
30 Aug 2022

Neural Tangent Kernel: A Survey
Eugene Golikov, Eduard Pokonechnyy, Vladimir Korviakov
29 Aug 2022

Kernel Memory Networks: A Unifying Framework for Memory Modeling
Georgios Iatropoulos, Johanni Brea, W. Gerstner
19 Aug 2022

A Sublinear Adversarial Training Algorithm
Yeqi Gao, Lianke Qin, Zhao-quan Song, Yitan Wang
GAN
10 Aug 2022

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran
14 Jul 2022

Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
Alexander Munteanu, Simon Omlor, Zhao-quan Song, David P. Woodruff
26 Jun 2022

Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks
Kaiqi Zhang, Ming Yin, Yu-Xiang Wang
MQ
13 Jun 2022

Bandwidth Selection for Gaussian Kernel Ridge Regression via Jacobian Control
Oskar Allerbo, Rebecka Jörnsten
24 May 2022

Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent
Yiping Lu, Jose H. Blanchet, Lexing Ying
15 May 2022

On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels
Amnon Geifman, Meirav Galun, David Jacobs, Ronen Basri
17 Mar 2022

Cascaded Gaps: Towards Gap-Dependent Regret for Risk-Sensitive Reinforcement Learning
Yingjie Fei, Ruitu Xu
07 Mar 2022

Deep Learning in Random Neural Fields: Numerical Experiments via Neural Tangent Kernel
Kaito Watanabe, Kotaro Sakamoto, Ryo Karakida, Sho Sonoda, S. Amari
OOD
10 Feb 2022

Learning Representation from Neural Fisher Kernel with Low-rank Approximation
Ruixiang Zhang, Shuangfei Zhai, Etai Littwin, J. Susskind
SSL
04 Feb 2022

A Generalized Weighted Optimization Method for Computational Learning and Inversion
Bjorn Engquist, Kui Ren, Yunan Yang
23 Jan 2022

Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
Shaun Li
AI4CE
03 Jan 2022

Understanding Square Loss in Training Overparametrized Neural Network Classifiers
Tianyang Hu, Jun Wang, Wenjia Wang, Zhenguo Li
UQCV, AAML
07 Dec 2021

Learning Curves for Continual Learning in Neural Networks: Self-Knowledge Transfer and Forgetting
Ryo Karakida, S. Akaho
CLL
03 Dec 2021

Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Yatin Dandi, Arthur Jacot
FAtt
06 Nov 2021

Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning
Yingjie Fei, Zhuoran Yang, Yudong Chen, Zhaoran Wang
06 Nov 2021

Uniform Generalization Bounds for Overparameterized Neural Networks
Sattar Vakili, Michael Bromberg, Jezabel R. Garcia, Da-shan Shiu, A. Bernacchia
13 Sep 2021

A spectral-based analysis of the separation between two-layer neural networks and linear methods
Lei Wu, Jihao Long
10 Aug 2021

Deep Networks Provably Classify Data on Curves
Tingran Wang, Sam Buchanan, D. Gilboa, John N. Wright
29 Jul 2021