Neural Networks as Kernel Learners: The Silent Alignment Effect
Alexander B. Atanasov, Blake Bordelon, C. Pehlevan · MLT
arXiv:2111.00034 · 29 October 2021

Papers citing "Neural Networks as Kernel Learners: The Silent Alignment Effect"

Showing 50 of 57 citing papers.
• Corner Gradient Descent · Dmitry Yarotsky · 16 Apr 2025
• Dynamically Learning to Integrate in Recurrent Neural Networks · Blake Bordelon, Jordan Cotler, C. Pehlevan, Jacob A. Zavatone-Veth · 24 Mar 2025
• The Spectral Bias of Shallow Neural Network Learning is Shaped by the Choice of Non-linearity · Justin Sahs, Ryan Pyle, Fabio Anselmi, Ankit B. Patel · 13 Mar 2025
• Make Haste Slowly: A Theory of Emergent Structured Mixed Selectivity in Feature Learning ReLU Networks · Devon Jarvis, Richard Klein, Benjamin Rosman, Andrew M. Saxe · MLT · 08 Mar 2025
• A Theory of Initialisation's Impact on Specialisation · Devon Jarvis, Sebastian Lee, Clémentine Dominé, Andrew M. Saxe, Stefano Sarao Mannelli · CLL · 04 Mar 2025
• Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer · Blake Bordelon, C. Pehlevan · AI4CE · 04 Feb 2025
• Training Dynamics of In-Context Learning in Linear Attention · Yedi Zhang, Aaditya K. Singh, Peter E. Latham, Andrew Saxe · MLT · 28 Jan 2025
• Flexible task abstractions emerge in linear networks with fast and bounded units · Kai Sandbrink, Jan P. Bauer, A. Proca, Andrew M. Saxe, Christopher Summerfield, Ali Hummos · 17 Jan 2025
• Geometric Inductive Biases of Deep Networks: The Role of Data and Architecture · Sajad Movahedi, Antonio Orvieto, Seyed-Mohsen Moosavi-Dezfooli · AAML, AI4CE · 15 Oct 2024
• Features are fate: a theory of transfer learning in high-dimensional regression · Javan Tahir, Surya Ganguli, Grant M. Rotskoff · 10 Oct 2024
• Collective variables of neural networks: empirical time evolution and scaling laws · S. Tovey, Sven Krippendorf, M. Spannowsky, Konstantin Nikolaou, Christian Holm · 09 Oct 2024
• The Optimization Landscape of SGD Across the Feature Learning Strength · Alexander B. Atanasov, Alexandru Meterez, James B. Simon, C. Pehlevan · 06 Oct 2024
• SGD with memory: fundamental properties and stochastic acceleration · Dmitry Yarotsky, Maksim Velikanov · 05 Oct 2024
• How Feature Learning Can Improve Neural Scaling Laws · Blake Bordelon, Alexander B. Atanasov, C. Pehlevan · 26 Sep 2024
• Towards understanding epoch-wise double descent in two-layer linear neural networks · Amanda Olmin, Fredrik Lindsten · MLT · 13 Jul 2024
• Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning · D. Kunin, Allan Raventós, Clémentine Dominé, Feng Chen, David Klindt, Andrew M. Saxe, Surya Ganguli · MLT · 10 Jun 2024
• Repetita Iuvant: Data Repetition Allows SGD to Learn High-Dimensional Multi-Index Functions · Luca Arnaboldi, Yatin Dandi, Florent Krzakala, Luca Pesce, Ludovic Stephan · 24 May 2024
• Half-Space Feature Learning in Neural Networks · Mahesh Lorik Yadav, H. G. Ramaswamy, Chandrashekar Lakshminarayanan · MLT · 05 Apr 2024
• Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective · Shokichi Takakura, Taiji Suzuki · MLT · 22 Mar 2024
• Early Directional Convergence in Deep Homogeneous Neural Networks for Small Initializations · Akshay Kumar, Jarvis D. Haupt · ODL · 12 Mar 2024
• Directional Convergence Near Small Initializations and Saddles in Two-Homogeneous Neural Networks · Akshay Kumar, Jarvis D. Haupt · ODL · 14 Feb 2024
• A Dynamical Model of Neural Scaling Laws · Blake Bordelon, Alexander B. Atanasov, C. Pehlevan · 02 Feb 2024
• Task structure and nonlinearity jointly determine learned representational geometry · Matteo Alleman, Jack W. Lindsey, Stefano Fusi · 24 Jan 2024
• Manipulating Sparse Double Descent · Ya Shi Zhang · 19 Jan 2024
• Rethinking Adversarial Training with Neural Tangent Kernel · Guanlin Li, Han Qiu, Shangwei Guo, Jiwei Li, Tianwei Zhang · AAML · 04 Dec 2023
• Understanding Unimodal Bias in Multimodal Deep Linear Networks · Yedi Zhang, Peter E. Latham, Andrew Saxe · 01 Dec 2023
• The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks · Lénaic Chizat, Praneeth Netrapalli · 30 Nov 2023
• A Spectral Condition for Feature Learning · Greg Yang, James B. Simon, Jeremy Bernstein · 26 Oct 2023
• How connectivity structure shapes rich and lazy learning in neural circuits · Yuhan Helena Liu, A. Baratin, Jonathan H. Cornford, Stefan Mihalas, E. Shea-Brown, Guillaume Lajoie · 12 Oct 2023
• An Adaptive Tangent Feature Perspective of Neural Networks · Daniel LeJeune, Sina Alemohammad · 29 Aug 2023
• Catapults in SGD: spikes in the training loss and their impact on generalization through feature learning · Libin Zhu, Chaoyue Liu, Adityanarayanan Radhakrishnan, M. Belkin · 07 Jun 2023
• How Two-Layer Neural Networks Learn, One (Giant) Step at a Time · Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan · MLT · 29 May 2023
• A Rainbow in Deep Network Black Boxes · Florentin Guth, Brice Ménard, G. Rochette, S. Mallat · 29 May 2023
• Feature-Learning Networks Are Consistent Across Widths At Realistic Scales · Nikhil Vyas, Alexander B. Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, C. Pehlevan · 28 May 2023
• Neural (Tangent Kernel) Collapse · Mariia Seleznova, Dana Weitzner, Raja Giryes, Gitta Kutyniok, H. Chou · 25 May 2023
• Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks · Blake Bordelon, C. Pehlevan · MLT · 06 Apr 2023
• On the Stepwise Nature of Self-Supervised Learning · James B. Simon, Maksis Knutins, Liu Ziyin, Daniel Geisz, Abraham J. Fetterman, Joshua Albrecht · SSL · 27 Mar 2023
• TRAK: Attributing Model Behavior at Scale · Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, A. Madry · TDI · 24 Mar 2023
• Linear CNNs Discover the Statistical Structure of the Dataset Using Only the Most Dominant Frequencies · Hannah Pinson, Joeri Lenaerts, V. Ginis · 03 Mar 2023
• Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features · Adityanarayanan Radhakrishnan, Daniel Beaglehole, Parthe Pandit, M. Belkin · FAtt, MLT · 28 Dec 2022
• The Onset of Variance-Limited Behavior for Networks in the Lazy and Rich Regimes · Alexander B. Atanasov, Blake Bordelon, Sabarish Sainathan, C. Pehlevan · 23 Dec 2022
• Spectral Evolution and Invariance in Linear-width Neural Networks · Zhichao Wang, A. Engel, Anand D. Sarwate, Ioana Dumitriu, Tony Chiang · 11 Nov 2022
• Evolution of Neural Tangent Kernels under Benign and Adversarial Training · Noel Loo, Ramin Hasani, Alexander Amini, Daniela Rus · AAML · 21 Oct 2022
• What Can the Neural Tangent Kernel Tell Us About Adversarial Robustness? · Nikolaos Tsilivis, Julia Kempe · AAML · 11 Oct 2022
• The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks · Blake Bordelon, C. Pehlevan · 05 Oct 2022
• On Kernel Regression with Data-Dependent Kernels · James B. Simon · BDL · 04 Sep 2022
• A view of mini-batch SGD via generating functions: conditions of convergence, phase transitions, benefit from negative momenta · Maksim Velikanov, Denis Kuznedelev, Dmitry Yarotsky · 22 Jun 2022
• Limitations of the NTK for Understanding Generalization in Deep Learning · Nikhil Vyas, Yamini Bansal, Preetum Nakkiran · 20 Jun 2022
• Spectral Bias Outside the Training Set for Deep Networks in the Kernel Regime · Benjamin Bowman, Guido Montúfar · 06 Jun 2022
• Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks · Blake Bordelon, C. Pehlevan · MLT · 19 May 2022