Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes
Versions: v1, v2, v3 (latest)

20 May 2016
Carlo Baldassi, C. Borgs, J. Chayes, Alessandro Ingrosso, Carlo Lucibello, Luca Saglietti, R. Zecchina
arXiv (abs) · PDF · HTML

Papers citing "Unreasonable Effectiveness of Learning Neural Networks: From Accessible States and Robust Ensembles to Basic Algorithmic Schemes"

18 of 68 papers shown
Comparing Dynamics: Deep Neural Networks versus Glassy Systems
Marco Baity-Jesi, Levent Sagun, Mario Geiger, S. Spigler, Gerard Ben Arous, C. Cammarota, Yann LeCun, Matthieu Wyart, Giulio Biroli
19 Mar 2018 · AI4CE

Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning
Yao Zhang, Andrew M. Saxe, Madhu S. Advani, A. Lee
05 Mar 2018

An Optimal Control Approach to Deep Learning and Applications to Discrete-Weight Neural Networks
Qianxiao Li, Shuji Hao
04 Mar 2018

Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors
Gintare Karolina Dziugaite, Daniel M. Roy
26 Dec 2017 · MLT

A trans-disciplinary review of deep learning research for water resources scientists
Chaopeng Shen
06 Dec 2017 · AI4CE

Stochastic gradient descent performs variational inference, converges to limit cycles for deep networks
Pratik Chaudhari, Stefano Soatto
30 Oct 2017 · MLT

On the role of synaptic stochasticity in training low-precision neural networks
Carlo Baldassi, Federica Gerace, H. Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, R. Zecchina
26 Oct 2017

Optimal Errors and Phase Transitions in High-Dimensional Generalized Linear Models
Jean Barbier, Florent Krzakala, N. Macris, Léo Miolane, Lenka Zdeborová
10 Aug 2017

Parle: parallelizing stochastic gradient descent
Pratik Chaudhari, Carlo Baldassi, R. Zecchina, Stefano Soatto, Ameet Talwalkar, Adam M. Oberman
03 Jul 2017 · ODL · FedML

Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes
Lei Wu, Zhanxing Zhu, E. Weinan
30 Jun 2017 · ODL

Efficiency of quantum versus classical annealing in non-convex learning problems
Carlo Baldassi, R. Zecchina
26 Jun 2017

Empirical Analysis of the Hessian of Over-Parametrized Neural Networks
Levent Sagun, Utku Evci, V. U. Güney, Yann N. Dauphin, Léon Bottou
14 Jun 2017

A General Theory for Training Learning Machine
Hong Zhao
23 Apr 2017 · AI4CE

Deep Relaxation: partial differential equations for optimizing deep neural networks
Pratik Chaudhari, Adam M. Oberman, Stanley Osher, Stefano Soatto, G. Carlier
17 Apr 2017

Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data
Gintare Karolina Dziugaite, Daniel M. Roy
31 Mar 2017

Reinforced stochastic gradient descent for deep neural network learning
Haiping Huang, Taro Toyoizumi
27 Jan 2017 · ODL

Entropy-SGD: Biasing Gradient Descent Into Wide Valleys
Pratik Chaudhari, A. Choromańska, Stefano Soatto, Yann LeCun, Carlo Baldassi, C. Borgs, J. Chayes, Levent Sagun, R. Zecchina
06 Nov 2016 · ODL

On the energy landscape of deep networks
Pratik Chaudhari, Stefano Soatto
20 Nov 2015 · ODL