Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks

International Conference on Machine Learning (ICML), 2020
7 February 2020
Blake Bordelon
Abdulkadir Canatar
Cengiz Pehlevan
arXiv: 2002.02561

Papers citing "Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks"

50 / 149 papers shown
Dynamical Implicit Neural Representations
Yesom Park
Kelvin Kan
Thomas Flynn
Yi Huang
Shinjae Yoo
Stanley Osher
Xihaier Luo
AI4CE
78
0
0
26 Nov 2025
Condition Numbers and Eigenvalue Spectra of Shallow Networks on Spheres
Xinliang Liu
Tong Mao
Jinchao Xu
239
0
0
04 Nov 2025
Revisiting Knowledge Distillation: The Hidden Role of Dataset Size
Giulia Lanzillotta
Felix Sarnthein
Gil Kur
Thomas Hofmann
Bobby He
174
0
0
17 Oct 2025
INR-Bench: A Unified Benchmark for Implicit Neural Representations in Multi-Domain Regression and Reconstruction
L. Li
Fengyi Zhang
Zhong Wang
Lin Zhang
Ying Shen
160
1
0
11 Oct 2025
Theory of Scaling Laws for In-Context Regression: Depth, Width, Context and Time
Blake Bordelon
Mary I. Letey
Cengiz Pehlevan
215
5
0
01 Oct 2025
Scaling Laws and Spectra of Shallow Neural Networks in the Feature Learning Regime
Leonardo Defilippis
Yizhou Xu
Julius Girardin
Emanuele Troiani
Vittorio Erba
Lenka Zdeborová
Bruno Loureiro
Florent Krzakala
189
7
0
29 Sep 2025
Theoretical Foundations of Representation Learning using Unlabeled Data: Statistics and Optimization
Pascal Esser
Maximilian Fleissner
Debarghya Ghoshdastidar
SSL
274
0
0
23 Sep 2025
FW-GAN: Frequency-Driven Handwriting Synthesis with Wave-Modulated MLP Generator
Huynh Tong Dang Khoa
Dang Hoai Nam
Vo Nguyen Le Duy
132
0
0
28 Aug 2025
A Ridge Too Far: Correcting Over-Shrinkage via Negative Regularization
Dongseok Kim
Wonjun Jeong
305
0
0
24 Aug 2025
Feature learning is decoupled from generalization in high capacity neural networks
Niclas Goring
Charles London
Abdurrahman Hadi Erturk
Chris Mingard
Yoonsoo Nam
Ard A. Louis
OODMLT
309
2
0
25 Jul 2025
Statistical mechanics of extensive-width Bayesian neural networks near interpolation
Jean Barbier
Francesco Camilli
Minh-Toan Nguyen
Mauro Pastore
Rudy Skerk
262
1
0
30 May 2025
Learning Curves of Stochastic Gradient Descent in Kernel Regression
Haihan Zhang
Weicheng Lin
Yuanshi Liu
Cong Fang
204
2
0
28 May 2025
Saddle-To-Saddle Dynamics in Deep ReLU Networks: Low-Rank Bias in the First Saddle Escape
Ioannis Bantzis
James B. Simon
Arthur Jacot
ODL
433
2
0
27 May 2025
Superposition Yields Robust Neural Scaling
Yizhou Liu
Ziming Liu
Jeff Gore
MILM
758
13
0
15 May 2025
Learning curves theory for hierarchically compositional data with power-law distributed features
Francesco Cagnetta
Hyunmo Kang
Matthieu Wyart
393
6
0
11 May 2025
Scaling Laws and Representation Learning in Simple Hierarchical Languages: Transformers vs. Convolutional Architectures
Francesco Cagnetta
Alessandro Favero
Antonio Sclocchi
Matthieu Wyart
443
3
0
11 May 2025
Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime
Annual Conference Computational Learning Theory (COLT), 2025
Francesco Camilli
D. Tieplova
Eleonora Bergamin
Jean Barbier
1.0K
3
0
06 May 2025
Generalization through variance: how noise shapes inductive biases in diffusion models
International Conference on Learning Representations (ICLR), 2025
John J. Vastola
DiffM
1.2K
19
0
16 Apr 2025
Feature Learning beyond the Lazy-Rich Dichotomy: Insights from Representational Geometry
Chi-Ning Chou
Hang Le
Yichen Wang
SueYeon Chung
499
5
0
23 Mar 2025
A Multi-Power Law for Loss Curve Prediction Across Learning Rate Schedules
International Conference on Learning Representations (ICLR), 2025
Kairong Luo
Haodong Wen
Shengding Hu
Zhenbo Sun
Zhiyuan Liu
Maosong Sun
Kaifeng Lyu
Wenguang Chen
CLL
326
19
0
17 Mar 2025
Uncertainty Quantification From Scaling Laws in Deep Neural Networks
Ibrahim Elsharkawy
Yonatan Kahn
Benjamin Hooberman
UQCV
312
0
0
07 Mar 2025
Position: Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena (Neural Collapse, Emergence, Lazy/Rich Regime, and Grokking)
Yoonsoo Nam
Seok Hyeong Lee
Clementine Domine
Yea Chan Park
Charles London
Wonyl Choi
Niclas Goring
Seungjai Lee
AI4CE
633
5
0
28 Feb 2025
Spectral Analysis of Representational Similarity with Limited Neurons
Hyunmo Kang
Abdulkadir Canatar
SueYeon Chung
526
3
0
27 Feb 2025
A High Dimensional Statistical Model for Adversarial Training: Geometry and Trade-Offs
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Kasimir Tanner
Matteo Vilucchio
Bruno Loureiro
Florent Krzakala
AAML
484
4
0
31 Dec 2024
On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory
Andrea Perin
Stéphane Deny
634
9
0
16 Dec 2024
Loss-to-Loss Prediction: Scaling Laws for All Datasets
David Brandfonbrener
Nikhil Anand
Nikhil Vyas
Eran Malach
Sham Kakade
339
13
0
19 Nov 2024
A Random Matrix Theory Perspective on the Spectrum of Learned Features and Asymptotic Generalization Capabilities
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Yatin Dandi
Luca Pesce
Hugo Cui
Florent Krzakala
Yue M. Lu
Bruno Loureiro
MLT
391
11
0
24 Oct 2024
A Simple Model of Inference Scaling Laws
Noam Levi
LRM
270
25
0
21 Oct 2024
How Feature Learning Can Improve Neural Scaling Laws
International Conference on Learning Representations (ICLR), 2024
Blake Bordelon
Alexander B. Atanasov
Cengiz Pehlevan
561
43
0
26 Sep 2024
Statistical Mechanics of Min-Max Problems
Yuma Ichikawa
Koji Hukushima
339
2
0
09 Sep 2024
Overfitting Behaviour of Gaussian Kernel Ridgeless Regression: Varying Bandwidth or Dimensionality
Neural Information Processing Systems (NeurIPS), 2024
Marko Medvedev
Gal Vardi
Nathan Srebro
254
9
0
05 Sep 2024
Improving Adaptivity via Over-Parameterization in Sequence Models
Neural Information Processing Systems (NeurIPS), 2024
Yicheng Li
Qian Lin
322
2
0
02 Sep 2024
On the Pinsker bound of inner product kernel regression in large dimensions
Weihao Lu
Jialin Ding
Haobo Zhang
Qian Lin
364
1
0
02 Sep 2024
Implicit Regularization Paths of Weighted Neural Representations
Neural Information Processing Systems (NeurIPS), 2024
Jin-Hong Du
Pratik Patil
310
1
0
28 Aug 2024
Risk and cross validation in ridge regression with correlated samples
Alexander B. Atanasov
Jacob A. Zavatone-Veth
Cengiz Pehlevan
591
9
0
08 Aug 2024
Parameter-Efficient Fine-Tuning for Continual Learning: A Neural Tangent Kernel Perspective
Jingren Liu
Zhong Ji
YunLong Yu
Jiale Cao
Yanwei Pang
Jungong Han
Xuelong Li
CLL
455
5
0
24 Jul 2024
Early learning of the optimal constant solution in neural networks and humans
Jirko Rubruck
Jan P. Bauer
Andrew M. Saxe
Christopher Summerfield
455
6
0
25 Jun 2024
A rationale from frequency perspective for grokking in training neural network
Zhangchen Zhou
Yaoyu Zhang
Z. Xu
390
3
0
24 May 2024
Understanding the dynamics of the frequency bias in neural networks
Juan Molina
Mircea Petrache
F. Sahli Costabal
Matías Courdurier
358
5
0
23 May 2024
Restoring balance: principled under/oversampling of data for optimal classification
International Conference on Machine Learning (ICML), 2024
Emanuele Loffredo
Mauro Pastore
Simona Cocco
R. Monasson
341
13
0
15 May 2024
Wilsonian Renormalization of Neural Network Gaussian Processes
Jessica N. Howard
Ro Jefferson
Anindita Maiti
Zohar Ringel
BDL
584
10
0
09 May 2024
Loss Jump During Loss Switch in Solving PDEs with Neural Networks
Communications in Computational Physics (Commun. Comput. Phys.), 2024
Zhiwei Wang
Lulu Zhang
Zhongwang Zhang
Z. Xu
278
2
0
06 May 2024
Sliding down the stairs: how correlated latent variables accelerate learning with neural networks
Lorenzo Bardone
Sebastian Goldt
330
13
0
12 Apr 2024
Generalization error of spectral algorithms
International Conference on Learning Representations (ICLR), 2024
Maksim Velikanov
Maxim Panov
Dmitry Yarotsky
368
1
0
18 Mar 2024
Near-Interpolators: Rapid Norm Growth and the Trade-Off between Interpolation and Generalization
International Conference on Artificial Intelligence and Statistics (AISTATS), 2024
Yutong Wang
Rishi Sonthalia
Wei Hu
379
8
0
12 Mar 2024
Asymptotic Theory for Linear Functionals of Kernel Ridge Regression
Rui Tuo
Lu Zou
245
4
0
07 Mar 2024
Neural population geometry and optimal coding of tasks with shared latent structure
Albert J. Wakhloo
Will Slatton
SueYeon Chung
308
8
0
26 Feb 2024
Model Collapse Demystified: The Case of Regression
Elvis Dohmatob
Yunzhen Feng
Julia Kempe
490
68
0
12 Feb 2024
A Tale of Tails: Model Collapse as a Change of Scaling Laws
International Conference on Machine Learning (ICML), 2024
Elvis Dohmatob
Yunzhen Feng
Pu Yang
Francois Charton
Julia Kempe
346
119
0
10 Feb 2024
Towards Understanding Inductive Bias in Transformers: A View From Infinity
Itay Lavie
Guy Gur-Ari
Zohar Ringel
384
11
0
07 Feb 2024