
arXiv:2110.11749
Feature Learning and Signal Propagation in Deep Neural Networks
Yizhang Lou, Chris Mingard, Yoonsoo Nam, Soufiane Hayou
22 October 2021

Papers citing "Feature Learning and Signal Propagation in Deep Neural Networks"

16 citing papers:

Theoretical characterisation of the Gauss-Newton conditioning in Neural Networks
Jim Zhao, Sidak Pal Singh, Aurelien Lucchi (04 Nov 2024)

A spring-block theory of feature learning in deep neural networks
Chengzhi Shi, Liming Pan, Ivan Dokmanić (28 Jul 2024)

Understanding and Minimising Outlier Features in Neural Network Training
Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, Thomas Hofmann (29 May 2024)

The Feature Speed Formula: a flexible approach to scale hyper-parameters of deep neural networks
Lénaic Chizat, Praneeth Netrapalli (30 Nov 2023)

Commutative Width and Depth Scaling in Deep Neural Networks
Soufiane Hayou (02 Oct 2023)

Leave-one-out Distinguishability in Machine Learning
Jiayuan Ye, Anastasia Borovykh, Soufiane Hayou, Reza Shokri (29 Sep 2023)

The Tunnel Effect: Building Data Representations in Deep Neural Networks
Wojciech Masarczyk, M. Ostaszewski, Ehsan Imani, Razvan Pascanu, Piotr Miłoś, Tomasz Trzciński (31 May 2023)

Neural (Tangent Kernel) Collapse
Mariia Seleznova, Dana Weitzner, Raja Giryes, Gitta Kutyniok, H. Chou (25 May 2023)

Do deep neural networks have an inbuilt Occam's razor?
Chris Mingard, Henry Rees, Guillermo Valle Pérez, A. Louis (13 Apr 2023)

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks
Blake Bordelon, C. Pehlevan (06 Apr 2023)

Width and Depth Limits Commute in Residual Networks
Soufiane Hayou, Greg Yang (01 Feb 2023)

The Influence of Learning Rule on Representation Dynamics in Wide Neural Networks
Blake Bordelon, C. Pehlevan (05 Oct 2022)

On the infinite-depth limit of finite-width neural networks
Soufiane Hayou (03 Oct 2022)

Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks
Blake Bordelon, C. Pehlevan (19 May 2022)

Stochastic Training is Not Necessary for Generalization
Jonas Geiping, Micah Goldblum, Phillip E. Pope, Michael Moeller, Tom Goldstein (29 Sep 2021)

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang (15 Sep 2016)