ResearchTrend.AI
arXiv:2006.13198
Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks
23 June 2020
Abdulkadir Canatar, Blake Bordelon, C. Pehlevan

Papers citing "Spectral Bias and Task-Model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks"

All 34 citing papers shown.
  • Generalization through variance: how noise shapes inductive biases in diffusion models. John J. Vastola [DiffM] (16 Apr 2025)
  • On the similarity of bandwidth-tuned quantum kernels and classical kernels. Roberto Flórez Ablan, M. Roth, Jan Schnabel (07 Mar 2025)
  • Estimating the Spectral Moments of the Kernel Integral Operator from Finite Sample Matrices. Chanwoo Chun, SueYeon Chung, Daniel D. Lee (23 Oct 2024)
  • Breaking Neural Network Scaling Laws with Modularity. Akhilan Boopathy, Sunshine Jiang, William Yue, Jaedong Hwang, Abhiram Iyer, Ila Fiete [OOD] (09 Sep 2024)
  • Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality. Marko Medvedev, Gal Vardi, Nathan Srebro (05 Sep 2024)
  • When does compositional structure yield compositional generalization? A kernel theory. Samuel Lippl, Kim Stachenfeld [NAI, CoGe] (26 May 2024)
  • Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers. Lorenzo Tiberi, Francesca Mignacco, Kazuki Irie, H. Sompolinsky (24 May 2024)
  • NTK-Guided Few-Shot Class Incremental Learning. Jingren Liu, Zhong Ji, Yanwei Pang, Yunlong Yu [CLL] (19 Mar 2024)
  • Modify Training Directions in Function Space to Reduce Generalization Error. Yi Yu, Wenlian Lu, Boyu Chen (25 Jul 2023)
  • Higher-order topological kernels via quantum computation. Massimiliano Incudini, F. Martini, Alessandra Di Pierro (14 Jul 2023)
  • Sparsity-depth Tradeoff in Infinitely Wide Deep Neural Networks. Chanwoo Chun, Daniel D. Lee [BDL] (17 May 2023)
  • Do deep neural networks have an inbuilt Occam's razor? Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis [UQCV, BDL] (13 Apr 2023)
  • On the Stepwise Nature of Self-Supervised Learning. James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht [SSL] (27 Mar 2023)
  • Analyzing Convergence in Quantum Neural Networks: Deviations from Neural Tangent Kernels. Xuchen You, Shouvanik Chakrabarti, Boyang Chen, Xiaodi Wu (26 Mar 2023)
  • A Solvable Model of Neural Scaling Laws. A. Maloney, Daniel A. Roberts, J. Sully (30 Oct 2022)
  • Automatic and effective discovery of quantum kernels. Massimiliano Incudini, Daniele Lizzio Bosco, F. Martini, Michele Grossi, Giuseppe Serra, Alessandra Di Pierro (22 Sep 2022)
  • On the Activation Function Dependence of the Spectral Bias of Neural Networks. Q. Hong, Jonathan W. Siegel, Qinyan Tan, Jinchao Xu (09 Aug 2022)
  • Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting. Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran (14 Jul 2022)
  • Target alignment in truncated kernel ridge regression. Arash A. Amini, R. Baumgartner, Dai Feng (28 Jun 2022)
  • Overcoming the Spectral Bias of Neural Value Approximation. Ge Yang, Anurag Ajay, Pulkit Agrawal (09 Jun 2022)
  • Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks. Blake Bordelon, C. Pehlevan [MLT] (19 May 2022)
  • Sharp Asymptotics of Kernel Ridge Regression Beyond the Linear Regime. Hong Hu, Yue M. Lu (13 May 2022)
  • Contrasting random and learned features in deep Bayesian linear regression. Jacob A. Zavatone-Veth, William L. Tong, C. Pehlevan [BDL, MLT] (01 Mar 2022)
  • Tight Convergence Rate Bounds for Optimization Under Power Law Spectral Conditions. Maksim Velikanov, Dmitry Yarotsky (02 Feb 2022)
  • Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs. Inbar Seroussi, Gadi Naveh, Z. Ringel (31 Dec 2021)
  • Learning Curves for Continual Learning in Neural Networks: Self-Knowledge Transfer and Forgetting. Ryo Karakida, S. Akaho [CLL] (03 Dec 2021)
  • Learning with convolution and pooling operations in kernel methods. Theodor Misiakiewicz, Song Mei [MLT] (16 Nov 2021)
  • Representation Learning via Quantum Neural Tangent Kernels. Junyu Liu, F. Tacchino, Jennifer R. Glick, Liang Jiang, Antonio Mezzacapo (08 Nov 2021)
  • Neural Networks as Kernel Learners: The Silent Alignment Effect. Alexander B. Atanasov, Blake Bordelon, C. Pehlevan [MLT] (29 Oct 2021)
  • Locality defeats the curse of dimensionality in convolutional teacher-student scenarios. Alessandro Favero, Francesco Cagnetta, M. Wyart (16 Jun 2021)
  • A self consistent theory of Gaussian Processes captures feature learning effects in finite CNNs. Gadi Naveh, Z. Ringel [SSL, MLT] (08 Jun 2021)
  • Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime. Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala (02 Mar 2020)
  • Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks. Blake Bordelon, Abdulkadir Canatar, C. Pehlevan (07 Feb 2020)
  • Scaling Laws for Neural Language Models. Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei (23 Jan 2020)