arXiv:1803.01927
Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning
5 March 2018
Yao Zhang, Andrew M. Saxe, Madhu S. Advani, A. Lee
Papers citing "Energy-entropy competition and the effectiveness of stochastic gradient descent in machine learning" (7 / 7 papers shown):
- The Interpolating Information Criterion for Overparameterized Models. Liam Hodgkinson, Christopher van der Heide, Roberto Salomone, Fred Roosta, Michael W. Mahoney. 15 Jul 2023.
- Correlated Noise in Epoch-Based Stochastic Gradient Descent: Implications for Weight Variances. Marcel Kühn, B. Rosenow. 08 Jun 2023.
- Dissecting the Effects of SGD Noise in Distinct Regimes of Deep Learning. Antonio Sclocchi, Mario Geiger, M. Wyart. 31 Jan 2023.
- Deep Learning is Singular, and That's Good. Daniel Murfet, Susan Wei, Biwei Huang, Hui Li, Jesse Gell-Redman, T. Quella. 22 Oct 2020.
- Dimensionality compression and expansion in Deep Neural Networks. Stefano Recanatesi, M. Farrell, Madhu S. Advani, Timothy Moore, Guillaume Lajoie, E. Shea-Brown. 02 Jun 2019.
- Deep learning generalizes because the parameter-function map is biased towards simple functions. Guillermo Valle Pérez, Chico Q. Camargo, A. Louis. 22 May 2018.
- Three Factors Influencing Minima in SGD. Stanislaw Jastrzebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, Amos Storkey. 13 Nov 2017.