Comparing Dynamics: Deep Neural Networks versus Glassy Systems

19 March 2018 · AI4CE
M. Baity-Jesi, Levent Sagun, Mario Geiger, S. Spigler, Gerard Ben Arous, C. Cammarota, Yann LeCun, M. Wyart, Giulio Biroli

Papers citing "Comparing Dynamics: Deep Neural Networks versus Glassy Systems"

18 of 68 citing papers shown.
  • Meta-learners' learning dynamics are unlike learners'
    Neil C. Rabinowitz · 03 May 2019 · OffRL
  • Novel Uncertainty Framework for Deep Learning Ensembles
    Tal Kachman, Michal Moshkovitz, Michal Rosen-Zvi · 09 Apr 2019 · UQCV, OOD, BDL
  • Generalisation dynamics of online learning in over-parameterised neural networks
    Sebastian Goldt, Madhu S. Advani, Andrew M. Saxe, Florent Krzakala, Lenka Zdeborová · 25 Jan 2019
  • A Tail-Index Analysis of Stochastic Gradient Noise in Deep Neural Networks
    Umut Simsekli, Levent Sagun, Mert Gurbuzbalaban · 18 Jan 2019
  • Scaling description of generalization with number of parameters in deep learning
    Mario Geiger, Arthur Jacot, S. Spigler, Franck Gabriel, Levent Sagun, Stéphane d'Ascoli, Giulio Biroli, Clément Hongler, M. Wyart · 06 Jan 2019
  • Dreaming neural networks: rigorous results
    E. Agliari, Francesco Alemanno, Adriano Barra, A. Fachechi · 21 Dec 2018 · CLL
  • Deep learning for pedestrians: backpropagation in CNNs
    L. Boué · 29 Nov 2018 · 3DV, PINN
  • A jamming transition from under- to over-parametrization affects loss landscape and generalization
    S. Spigler, Mario Geiger, Stéphane d'Ascoli, Levent Sagun, Giulio Biroli, M. Wyart · 22 Oct 2018
  • Implicit Self-Regularization in Deep Neural Networks: Evidence from Random Matrix Theory and Implications for Learning
    Charles H. Martin, Michael W. Mahoney · 02 Oct 2018 · AI4CE
  • The jamming transition as a paradigm to understand the loss landscape of deep neural networks
    Mario Geiger, S. Spigler, Stéphane d'Ascoli, Levent Sagun, M. Baity-Jesi, Giulio Biroli, M. Wyart · 25 Sep 2018
  • PCA of high dimensional random walks with comparison to neural network training
    J. Antognini, Jascha Narain Sohl-Dickstein · 22 Jun 2018 · OOD
  • The committee machine: Computational to statistical gaps in learning a two-layers neural network
    Benjamin Aubin, Antoine Maillard, Jean Barbier, Florent Krzakala, N. Macris, Lenka Zdeborová · 14 Jun 2018
  • Input and Weight Space Smoothing for Semi-supervised Learning
    Safa Cicek, Stefano Soatto · 23 May 2018
  • Monotone Learning with Rectified Wire Networks
    V. Elser, Dan Schmidt, J. Yedidia · 10 May 2018
  • Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach
    Grant M. Rotskoff, Eric Vanden-Eijnden · 02 May 2018
  • A high-bias, low-variance introduction to Machine Learning for physicists
    Pankaj Mehta, Marin Bukov, Ching-Hao Wang, A. G. Day, C. Richardson, Charles K. Fisher, D. Schwab · 23 Mar 2018 · AI4CE
  • On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
    N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang · 15 Sep 2016 · ODL
  • The Loss Surfaces of Multilayer Networks
    A. Choromańska, Mikael Henaff, Michaël Mathieu, Gerard Ben Arous, Yann LeCun · 30 Nov 2014 · ODL