ResearchTrend.AI
Comparing Dynamics: Deep Neural Networks versus Glassy Systems (arXiv:1803.06969)

19 March 2018
M. Baity-Jesi, Levent Sagun, Mario Geiger, S. Spigler, Gerard Ben Arous, C. Cammarota, Yann LeCun, M. Wyart, Giulio Biroli
AI4CE

Papers citing "Comparing Dynamics: Deep Neural Networks versus Glassy Systems" (50 of 68 papers shown)
  • High-dimensional manifold of solutions in neural networks: insights from statistical physics
    Enrico M. Malatesta · 20 Feb 2025 · 46 / 4 / 0
  • Disentangling and Mitigating the Impact of Task Similarity for Continual Learning
    Naoki Hiratani · CLL · 30 May 2024 · 35 / 2 / 0
  • From Zero to Hero: How local curvature at artless initial conditions leads away from bad minima
    Tony Bonnaire, Giulio Biroli, C. Cammarota · 04 Mar 2024 · 38 / 0 / 0
  • The twin peaks of learning neural networks
    Elizaveta Demyanenko, Christoph Feinauer, Enrico M. Malatesta, Luca Saglietti · 23 Jan 2024 · 16 / 0 / 0
  • Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training
    Yefan Zhou, Tianyu Pang, Keqin Liu, Charles H. Martin, Michael W. Mahoney, Yaoqing Yang · 01 Dec 2023 · 34 / 7 / 0
  • On the Impact of Overparameterization on the Training of a Shallow Neural Network in High Dimensions
    Simon Martin, Francis Bach, Giulio Biroli · 07 Nov 2023 · 23 / 9 / 0
  • Emergent learning in physical systems as feedback-based aging in a glassy landscape
    Vidyesh Rao Anisetti, A. Kandala, J. M. Schwarz · AI4CE · 08 Sep 2023 · 14 / 4 / 0
  • The semantic landscape paradigm for neural networks
    Shreyas Gokhale · 18 Jul 2023 · 21 / 2 / 0
  • Black holes and the loss landscape in machine learning
    P. Kumar, Taniya Mandal, Swapnamay Mondal · 26 Jun 2023 · 28 / 2 / 0
  • A Three-regime Model of Network Pruning
    Yefan Zhou, Yaoqing Yang, Arin Chang, Michael W. Mahoney · 28 May 2023 · 29 / 10 / 0
  • Evolutionary Algorithms in the Light of SGD: Limit Equivalence, Minima Flatness, and Transfer Learning
    Andrei Kucharavy, R. Guerraoui, Ljiljana Dolamic · 20 May 2023 · 30 / 1 / 0
  • Typical and atypical solutions in non-convex neural networks with discrete and continuous weights
    Carlo Baldassi, Enrico M. Malatesta, Gabriele Perugini, R. Zecchina · MQ · 26 Apr 2023 · 37 / 11 / 0
  • Tradeoff of generalization error in unsupervised learning
    Gilhan Kim, Ho-Jun Lee, Junghyo Jo, Yongjoo Baek · 10 Mar 2023 · 13 / 0 / 0
  • Universal characteristics of deep neural network loss surfaces from random matrix theory
    Nicholas P. Baskerville, J. Keating, F. Mezzadri, J. Najnudel, Diego Granziol · 17 May 2022 · 22 / 4 / 0
  • Understanding out-of-distribution accuracies through quantifying difficulty of test samples
    Berfin Simsek, Melissa Hall, Levent Sagun · 28 Mar 2022 · 23 / 5 / 0
  • Quantifying Relevance in Learning and Inference
    M. Marsili, Y. Roudi · 01 Feb 2022 · 6 / 18 / 0
  • Complexity from Adaptive-Symmetries Breaking: Global Minima in the Statistical Mechanics of Deep Neural Networks
    Shaun Li · AI4CE · 03 Jan 2022 · 36 / 0 / 0
  • Mode connectivity in the loss landscape of parameterized quantum circuits
    Kathleen E. Hamilton, E. Lynn, R. Pooser · 09 Nov 2021 · 25 / 3 / 0
  • Exponentially Many Local Minima in Quantum Neural Networks
    Xuchen You, Xiaodi Wu · 06 Oct 2021 · 69 / 51 / 0
  • Learning through atypical "phase transitions" in overparameterized neural networks
    Carlo Baldassi, Clarissa Lauditi, Enrico M. Malatesta, R. Pacelli, Gabriele Perugini, R. Zecchina · 01 Oct 2021 · 26 / 26 / 0
  • Edge of chaos as a guiding principle for modern neural network training
    Lin Zhang, Ling Feng, Kan Chen, C. Lai · 20 Jul 2021 · 16 / 9 / 0
  • The Limiting Dynamics of SGD: Modified Loss, Phase Space Oscillations, and Anomalous Diffusion
    D. Kunin, Javier Sagastuy-Breña, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L. K. Yamins · 19 Jul 2021 · 31 / 15 / 0
  • Continual Learning in the Teacher-Student Setup: Impact of Task Similarity
    Sebastian Lee, Sebastian Goldt, Andrew M. Saxe · CLL · 09 Jul 2021 · 24 / 73 / 0
  • Characterization of Generalizability of Spike Timing Dependent Plasticity trained Spiking Neural Networks
    Biswadeep Chakraborty, Saibal Mukhopadhyay · 31 May 2021 · 12 / 15 / 0
  • Ensemble Inference Methods for Models With Noisy and Expensive Likelihoods
    Oliver R. A. Dunbar, Andrew B. Duncan, Andrew M. Stuart, Marie-Therese Wolfram · 07 Apr 2021 · 8 / 26 / 0
  • A spin-glass model for the loss surfaces of generative adversarial networks
    Nicholas P. Baskerville, J. Keating, F. Mezzadri, J. Najnudel · GAN · 07 Jan 2021 · 28 / 12 / 0
  • Perspective: A Phase Diagram for Deep Learning unifying Jamming, Feature Learning and Lazy Training
    Mario Geiger, Leonardo Petrini, M. Wyart · DRL · 30 Dec 2020 · 23 / 11 / 0
  • Statistical Mechanics of Deep Linear Neural Networks: The Back-Propagating Kernel Renormalization
    Qianyi Li, H. Sompolinsky · 07 Dec 2020 · 16 / 69 / 0
  • Align, then memorise: the dynamics of learning with feedback alignment
    Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt · 24 Nov 2020 · 26 / 36 / 0
  • Anomalous diffusion dynamics of learning in deep neural networks
    Guozhang Chen, Chengqing Qu, P. Gong · 22 Sep 2020 · 19 / 21 / 0
  • Low-loss connection of weight vectors: distribution-based approaches
    Ivan Anokhin, Dmitry Yarotsky · 3DV · 03 Aug 2020 · 17 / 4 / 0
  • Data-driven effective model shows a liquid-like deep learning
    Wenxuan Zou, Haiping Huang · 16 Jul 2020 · 24 / 2 / 0
  • Is SGD a Bayesian sampler? Well, almost
    Chris Mingard, Guillermo Valle Pérez, Joar Skalse, A. Louis · BDL · 26 Jun 2020 · 13 / 51 / 0
  • The Gaussian equivalence of generative models for learning with shallow neural networks
    Sebastian Goldt, Bruno Loureiro, Galen Reeves, Florent Krzakala, M. Mézard, Lenka Zdeborová · BDL · 25 Jun 2020 · 41 / 100 / 0
  • An analytic theory of shallow networks dynamics for hinge loss classification
    Franco Pellegrini, Giulio Biroli · 19 Jun 2020 · 24 / 19 / 0
  • Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks
    Umut Simsekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu · 16 Jun 2020 · 41 / 55 / 0
  • Is deeper better? It depends on locality of relevant features
    Takashi Mori, Masahito Ueda · OOD · 26 May 2020 · 15 / 4 / 0
  • The Loss Surfaces of Neural Networks with General Activation Functions
    Nicholas P. Baskerville, J. Keating, F. Mezzadri, J. Najnudel · ODL, AI4CE · 08 Apr 2020 · 9 / 26 / 0
  • Optimization for deep learning: theory and algorithms
    Ruoyu Sun · ODL · 19 Dec 2019 · 14 / 168 / 0
  • Rademacher complexity and spin glasses: A link between the replica and statistical theories of learning
    A. Abbara, Benjamin Aubin, Florent Krzakala, Lenka Zdeborová · 05 Dec 2019 · 22 / 13 / 0
  • On the Heavy-Tailed Theory of Stochastic Gradient Descent for Deep Neural Networks
    Umut Simsekli, Mert Gurbuzbalaban, T. H. Nguyen, G. Richard, Levent Sagun · 29 Nov 2019 · 15 / 55 / 0
  • Mean-field inference methods for neural networks
    Marylou Gabrié · AI4CE · 03 Nov 2019 · 16 / 33 / 0
  • Generalization in multitask deep neural classifiers: a statistical physics approach
    Tyler Lee, A. Ndirango · AI4CE · 30 Oct 2019 · 19 / 20 / 0
  • From complex to simple: hierarchical free-energy landscape renormalized in deep neural networks
    H. Yoshino · 22 Oct 2019 · 14 / 6 / 0
  • The asymptotic spectrum of the Hessian of DNN throughout training
    Arthur Jacot, Franck Gabriel, Clément Hongler · 01 Oct 2019 · 11 / 35 / 0
  • Maximal Relevance and Optimal Learning Machines
    O. Duranthon, M. Marsili, R. Xie · 27 Sep 2019 · 11 / 0 / 0
  • Weight-space symmetry in deep networks gives rise to permutation saddles, connected by equal-loss valleys across the loss landscape
    Johanni Brea, Berfin Simsek, Bernd Illing, W. Gerstner · 05 Jul 2019 · 15 / 55 / 0
  • Disentangling feature and lazy training in deep neural networks
    Mario Geiger, S. Spigler, Arthur Jacot, M. Wyart · 19 Jun 2019 · 13 / 17 / 0
  • Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup
    Sebastian Goldt, Madhu S. Advani, Andrew M. Saxe, Florent Krzakala, Lenka Zdeborová · MLT · 18 Jun 2019 · 19 / 140 / 0
  • Luck Matters: Understanding Training Dynamics of Deep ReLU Networks
    Yuandong Tian, Tina Jiang, Qucheng Gong, Ari S. Morcos · 31 May 2019 · 11 / 24 / 0