ResearchTrend.AI
On the Inductive Bias of Neural Tangent Kernels
A. Bietti, Julien Mairal
arXiv:1905.12173, 29 May 2019

Papers citing "On the Inductive Bias of Neural Tangent Kernels"

Showing 50 of 69 citing papers.

Sobolev norm inconsistency of kernel interpolation
Yunfei Yang
29 Apr 2025

Generalization through variance: how noise shapes inductive biases in diffusion models
John J. Vastola
16 Apr 2025

Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks
Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M. Saxe
08 Mar 2025

On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory
Guhan Chen, Yicheng Li, Qian Lin
08 Oct 2024

Tuning Frequency Bias of State Space Models
Annan Yu, Dongwei Lyu, S. H. Lim, Michael W. Mahoney, N. Benjamin Erichson
02 Oct 2024

Graph Neural Thompson Sampling
Shuang Wu, Arash A. Amini
15 Jun 2024

Learning Beyond Pattern Matching? Assaying Mathematical Understanding in LLMs
Siyuan Guo, Aniket Didolkar, Nan Rosemary Ke, Anirudh Goyal, Ferenc Huszár, Bernhard Schölkopf
24 May 2024

U-Nets as Belief Propagation: Efficient Classification, Denoising, and Diffusion in Generative Hierarchical Models
Song Mei
29 Apr 2024

On the Eigenvalue Decay Rates of a Class of Neural-Network Related Kernel Functions Defined on General Domains
Yicheng Li, Zixiong Yu, Y. Cotronis, Qian Lin
04 May 2023

Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels
Xuchen You, Shouvanik Chakrabarti, Boyang Chen, Xiaodi Wu
26 Mar 2023

Convolutional Neural Networks as 2-D systems
Dennis Gramlich, Patricia Pauli, C. Scherer, Frank Allgöwer, C. Ebenbauer
06 Mar 2023

VIPeR: Provably Efficient Algorithm for Offline RL with Neural Function Approximation
Thanh Nguyen-Tang, R. Arora
24 Feb 2023

The SSL Interplay: Augmentations, Inductive Bias, and Generalization
Vivien A. Cabannes, B. Kiani, Randall Balestriero, Yann LeCun, A. Bietti
06 Feb 2023

A Comprehensive Survey of Dataset Distillation
Shiye Lei, Dacheng Tao
13 Jan 2023

Learning Lipschitz Functions by GD-trained Shallow Overparameterized ReLU Neural Networks
Ilja Kuzborskij, Csaba Szepesvári
28 Dec 2022

Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs
Chenxiao Yang, Qitian Wu, Jiahua Wang, Junchi Yan
18 Dec 2022

Leveraging Unlabeled Data to Track Memorization
Mahsa Forouzesh, Hanie Sedghi, Patrick Thiran
08 Dec 2022

DINER: Disorder-Invariant Implicit Neural Representation
Shaowen Xie, Hao Zhu, Zhen Liu, Qi Zhang, You Zhou, Xun Cao, Zhan Ma
15 Nov 2022

Characterizing the Spectrum of the NTK via a Power Series Expansion
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar
15 Nov 2022

What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness?
Nikolaos Tsilivis, Julia Kempe
11 Oct 2022

Joint Embedding Self-Supervised Learning in the Kernel Regime
B. Kiani, Randall Balestriero, Yubei Chen, S. Lloyd, Yann LeCun
29 Sep 2022

Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty
Thomas George, Guillaume Lajoie, A. Baratin
19 Sep 2022

On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks
Hubert Leterme, K. Polisano, V. Perrier, Alahari Karteek
19 Sep 2022

Approximation results for Gradient Descent trained Shallow Neural Networks in $1d$
R. Gentile, G. Welper
17 Sep 2022

Robustness in deep learning: The good (width), the bad (depth), and the ugly (initialization)
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher
15 Sep 2022

On the Implicit Bias in Deep-Learning Algorithms
Gal Vardi
26 Aug 2022

Single Model Uncertainty Estimation via Stochastic Data Centering
Jayaraman J. Thiagarajan, Rushil Anirudh, V. Narayanaswamy, P. Bremer
14 Jul 2022

Why Quantization Improves Generalization: NTK of Binary Weight Neural Networks
Kaiqi Zhang, Ming Yin, Yu-Xiang Wang
13 Jun 2022

Overcoming the Spectral Bias of Neural Value Approximation
Ge Yang, Anurag Ajay, Pulkit Agrawal
09 Jun 2022

Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs
Etienne Boursier, Loucas Pillaud-Vivien, Nicolas Flammarion
02 Jun 2022

Trading Positional Complexity vs. Deepness in Coordinate Networks
Jianqiao Zheng, Sameera Ramasinghe, Xueqian Li, Simon Lucey
18 May 2022

On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels
Amnon Geifman, Meirav Galun, David Jacobs, Ronen Basri
17 Mar 2022

The Spectral Bias of Polynomial Neural Networks
Moulik Choraria, L. Dadi, Grigorios G. Chrysos, Julien Mairal, V. Cevher
27 Feb 2022

Understanding Square Loss in Training Overparametrized Neural Network Classifiers
Tianyang Hu, Jun Wang, Wei Cao, Zhenguo Li
07 Dec 2021

Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Yatin Dandi, Arthur Jacot
06 Nov 2021

A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs
Ido Nachum, Jan Hązła, Michael C. Gastpar, Anatoly Khina
03 Nov 2021

Provable Regret Bounds for Deep Online Learning and Control
Xinyi Chen, Edgar Minasyan, Jason D. Lee, Elad Hazan
15 Oct 2021

New Insights into Graph Convolutional Networks using Neural Tangent Kernels
Mahalakshmi Sabanayagam, P. Esser, D. Ghoshdastidar
08 Oct 2021

How Does Knowledge Graph Embedding Extrapolate to Unseen Data: A Semantic Evidence View
Ren Li, Yanan Cao, Qiannan Zhu, Guanqun Bi, Fang Fang, Yi Liu, Qian Li
24 Sep 2021

Simple, Fast, and Flexible Framework for Matrix Completion with Infinite Width Neural Networks
Adityanarayanan Radhakrishnan, George Stefanakis, M. Belkin, Caroline Uhler
31 Jul 2021

Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
Alessandro Favero, Francesco Cagnetta, M. Wyart
16 Jun 2021

What can linearized neural networks actually say about generalization?
Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
12 Jun 2021

A Neural Tangent Kernel Perspective of GANs
Jean-Yves Franceschi, Emmanuel de Bézenac, Ibrahim Ayed, Mickaël Chen, Sylvain Lamprier, Patrick Gallinari
10 Jun 2021

Neural Active Learning with Performance Guarantees
Pranjal Awasthi, Christoph Dann, Claudio Gentile, Ayush Sekhari, Zhilei Wang
06 Jun 2021

Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes
Elvis Dohmatob
04 Jun 2021

Relative stability toward diffeomorphisms indicates performance in deep nets
Leonardo Petrini, Alessandro Favero, Mario Geiger, M. Wyart
06 May 2021

SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization
Amir Hertz, Or Perel, Raja Giryes, O. Sorkine-Hornung, Daniel Cohen-Or
19 Apr 2021

Learning with invariances in random features and kernel models
Song Mei, Theodor Misiakiewicz, Andrea Montanari
25 Feb 2021

Gradient Starvation: A Learning Proclivity in Neural Networks
Mohammad Pezeshki, Sekouba Kaba, Yoshua Bengio, Aaron Courville, Doina Precup, Guillaume Lajoie
18 Nov 2020

Linear Mode Connectivity in Multitask and Continual Learning
Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Görür, Razvan Pascanu, H. Ghasemzadeh
09 Oct 2020