Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization

Qianyi Li, H. Sompolinsky
arXiv:2012.04030 · 7 December 2020

Papers citing "Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization"

19 papers shown

Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime
Francesco Camilli, D. Tieplova, Eleonora Bergamin, Jean Barbier
06 May 2025

Deep Neural Nets as Hamiltonians
Mike Winer, Boris Hanin
31 Mar 2025

Deep Linear Network Training Dynamics from Random Initialization: Data, Width, Depth, and Hyperparameter Transfer
Blake Bordelon, C. Pehlevan
AI4CE · 04 Feb 2025

Exact full-RSB SAT/UNSAT transition in infinitely wide two-layer neural networks
B. Annesi, Enrico M. Malatesta, Francesco Zamponi
09 Oct 2024

Bayesian RG Flow in Neural Network Field Theories
Jessica N. Howard, Marc S. Klinger, Anindita Maiti, A. G. Stapleton
27 May 2024

Dissecting the Interplay of Attention Paths in a Statistical Mechanics Theory of Transformers
Lorenzo Tiberi, Francesca Mignacco, Kazuki Irie, H. Sompolinsky
24 May 2024

A Short Review on Novel Approaches for Maximum Clique Problem: from Classical algorithms to Graph Neural Networks and Quantum algorithms
Raffaele Marino, L. Buffoni, Bogdan Zavalnij
GNN · 13 Mar 2024

Grokking as a First Order Phase Transition in Two Layer Networks
Noa Rubin, Inbar Seroussi, Z. Ringel
05 Oct 2023

Connecting NTK and NNGP: A Unified Theoretical Framework for Wide Neural Network Learning Dynamics
Yehonatan Avidan, Qianyi Li, H. Sompolinsky
08 Sep 2023

Quantitative CLTs in Deep Neural Networks
Stefano Favaro, Boris Hanin, Domenico Marinucci, I. Nourdin, G. Peccati
BDL · 12 Jul 2023

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
Blake Bordelon, C. Pehlevan
MLT · 06 Apr 2023

Online Learning for the Random Feature Model in the Student-Teacher Framework
Roman Worschech, B. Rosenow
24 Mar 2023

Statistical Physics of Deep Neural Networks: Initialization toward Optimal Channels
Kangyu Weng, Aohua Cheng, Ziyang Zhang, Pei Sun, Yang Tian
04 Dec 2022

Globally Gated Deep Linear Networks
Qianyi Li, H. Sompolinsky
AI4CE · 31 Oct 2022

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, C. Pehlevan
MLT · 19 May 2022

Contrasting random and learned features in deep Bayesian linear regression
Jacob A. Zavatone-Veth, William L. Tong, C. Pehlevan
BDL · MLT · 01 Mar 2022

Separation of Scales and a Thermodynamic Description of Feature Learning in Some CNNs
Inbar Seroussi, Gadi Naveh, Z. Ringel
31 Dec 2021

Unified field theoretical approach to deep and recurrent neuronal networks
Kai Segadlo, Bastian Epping, Alexander van Meegen, David Dahmen, Michael Krämer, M. Helias
AI4CE · BDL · 10 Dec 2021

Depth induces scale-averaging in overparameterized linear Bayesian neural networks
Jacob A. Zavatone-Veth, C. Pehlevan
BDL · UQCV · MDE · 23 Nov 2021