Deep Equals Shallow for ReLU Networks in Kernel Regimes


30 September 2020
A. Bietti, Francis R. Bach

Papers citing "Deep Equals Shallow for ReLU Networks in Kernel Regimes"

Showing 50 of 69 citing papers.

Super-fast rates of convergence for Neural Networks Classifiers under the Hard Margin Condition
Nathanael Tepakbong, Ding-Xuan Zhou, Xiang Zhou
13 May 2025

Fractal and Regular Geometry of Deep Neural Networks
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, S. Vigogna
08 Apr 2025

Neural Tangent Kernel of Neural Networks with Loss Informed by Differential Operators
Weiye Gan, Yicheng Li, Q. Lin, Zuoqiang Shi
14 Mar 2025

A Gap Between the Gaussian RKHS and Neural Networks: An Infinite-Center Asymptotic Analysis
Akash Kumar, Rahul Parhi, Mikhail Belkin
22 Feb 2025

Gradient Descent Finds Over-Parameterized Neural Networks with Sharp Generalization for Nonparametric Regression
Yingzhen Yang, Ping Li
05 Nov 2024

Sample-efficient Bayesian Optimisation Using Known Invariances
Theodore Brown, Alexandru Cioba, Ilija Bogunovic
22 Oct 2024

A Lipschitz spaces view of infinitely wide shallow neural networks
Francesca Bartolucci, Marcello Carioni, José A. Iglesias, Yury Korolev, Emanuele Naldi, S. Vigogna
18 Oct 2024

On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory
Guhan Chen, Yicheng Li, Qian Lin
08 Oct 2024

Improving Adaptivity via Over-Parameterization in Sequence Models
Yicheng Li, Qian Lin
02 Sep 2024

Bounds for the smallest eigenvalue of the NTK for arbitrary spherical data of arbitrary dimension
Kedar Karhadkar, Michael Murray, Guido Montúfar
23 May 2024

Spectral complexity of deep neural networks
Simmaco Di Lillo, Domenico Marinucci, Michele Salvi, S. Vigogna
15 May 2024

Sliding down the stairs: how correlated latent variables accelerate learning with neural networks
Lorenzo Bardone, Sebastian Goldt
12 Apr 2024

Neural reproducing kernel Banach spaces and representer theorems for deep networks
Francesca Bartolucci, E. De Vito, Lorenzo Rosasco, S. Vigogna
13 Mar 2024

Towards Understanding Inductive Bias in Transformers: A View From Infinity
Itay Lavie, Guy Gur-Ari, Z. Ringel
07 Feb 2024

Generalization in Kernel Regression Under Realistic Assumptions
Daniel Barzilai, Ohad Shamir
26 Dec 2023

On the Nystrom Approximation for Preconditioning in Kernel Machines
Amirhesam Abedsoltan, Parthe Pandit, Luis Rademacher, Misha Belkin
06 Dec 2023

The Expressive Power of Low-Rank Adaptation
Yuchen Zeng, Kangwook Lee
26 Oct 2023

On the Foundations of Shortcut Learning
Katherine Hermann, Hossein Mobahi, Thomas Fel, M. C. Mozer
24 Oct 2023

On the Asymptotic Learning Curves of Kernel Ridge Regression under Power-law Decay
Yicheng Li, Hao Zhang, Qian Lin
23 Sep 2023

How many Neurons do we need? A refined Analysis for Shallow Networks trained with Gradient Descent
Mike Nguyen, Nicole Mücke
14 Sep 2023

Optimal Rate of Kernel Regression in Large Dimensions
Weihao Lu, Hao Zhang, Yicheng Li, Manyun Xu, Qian Lin
08 Sep 2023

Non-Parametric Representation Learning with Kernels
P. Esser, Maximilian Fleissner, D. Ghoshdastidar
05 Sep 2023

Controlling the Inductive Bias of Wide Neural Networks by Modifying the Kernel's Spectrum
Amnon Geifman, Daniel Barzilai, Ronen Basri, Meirav Galun
26 Jul 2023

Quantitative CLTs in Deep Neural Networks
Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati
12 Jul 2023

Kernels, Data & Physics
Francesco Cagnetta, Deborah Oliveira, Mahalakshmi Sabanayagam, Nikolaos Tsilivis, Julia Kempe
05 Jul 2023

Neural Hilbert Ladders: Multi-Layer Neural Networks in Function Space
Zhengdao Chen
03 Jul 2023

A Quantitative Functional Central Limit Theorem for Shallow Neural Networks
Valentina Cammarota, Domenico Marinucci, M. Salvi, S. Vigogna
29 Jun 2023

The $L^\infty$ Learnability of Reproducing Kernel Hilbert Spaces
Hongrui Chen, Jihao Long, Lei Wu
05 Jun 2023

A Rainbow in Deep Network Black Boxes
Florentin Guth, Brice Ménard, G. Rochette, S. Mallat
29 May 2023

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension
Moritz Haas, David Holzmüller, U. V. Luxburg, Ingo Steinwart
23 May 2023

Sparsity-depth Tradeoff in Infinitely Wide Deep Neural Networks
Chanwoo Chun, Daniel D. Lee
17 May 2023

On the Eigenvalue Decay Rates of a Class of Neural-Network Related Kernel Functions Defined on General Domains
Yicheng Li, Zixiong Yu, Y. Cotronis, Qian Lin
04 May 2023

Adaptation to Misspecified Kernel Regularity in Kernelised Bandits
Yusha Liu, Aarti Singh
26 Apr 2023

Sparse Gaussian Processes with Spherical Harmonic Features Revisited
Stefanos Eleftheriadis, Dominic Richards, J. Hensman
28 Mar 2023

Kernel interpolation generalizes poorly
Yicheng Li, Haobo Zhang, Qian Lin
28 Mar 2023

Generalization Ability of Wide Neural Networks on $\mathbb{R}$
Jianfa Lai, Manyun Xu, Rui Chen, Qi-Rong Lin
12 Feb 2023

A Kernel Perspective of Skip Connections in Convolutional Networks
Daniel Barzilai, Amnon Geifman, Meirav Galun, Ronen Basri
27 Nov 2022

An Empirical Analysis of the Advantages of Finite- v.s. Infinite-Width Bayesian Neural Networks
Jiayu Yao, Yaniv Yacoby, Beau Coker, Weiwei Pan, Finale Doshi-Velez
16 Nov 2022

Characterizing the Spectrum of the NTK via a Power Series Expansion
Michael Murray, Hui Jin, Benjamin Bowman, Guido Montúfar
15 Nov 2022

Generalization Properties of NAS under Activation and Skip Connection Search
Zhenyu Zhu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher
15 Sep 2022

What Can Be Learnt With Wide Convolutional Neural Networks?
Francesco Cagnetta, Alessandro Favero, M. Wyart
01 Aug 2022

Benign, Tempered, or Catastrophic: A Taxonomy of Overfitting
Neil Rohit Mallinar, James B. Simon, Amirhesam Abedsoltan, Parthe Pandit, M. Belkin, Preetum Nakkiran
14 Jul 2022

Graph Neural Network Bandits
Parnian Kassraie, Andreas Krause, Ilija Bogunovic
13 Jul 2022

Learning sparse features can lead to overfitting in neural networks
Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, M. Wyart
24 Jun 2022

VC Theoretical Explanation of Double Descent
Eng Hock Lee, V. Cherkassky
31 May 2022

Sobolev Acceleration and Statistical Optimality for Learning Elliptic Equations via Gradient Descent
Yiping Lu, Jose H. Blanchet, Lexing Ying
15 May 2022

On the Spectral Bias of Convolutional Neural Tangent and Gaussian Process Kernels
Amnon Geifman, Meirav Galun, David Jacobs, Ronen Basri
17 Mar 2022

Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
Shaun Li
03 Jan 2022

Eigenspace Restructuring: a Principle of Space and Frequency in Neural Networks
Lechao Xiao
10 Dec 2021

Understanding Layer-wise Contributions in Deep Neural Networks through Spectral Analysis
Yatin Dandi, Arthur Jacot
06 Nov 2021