Stochastic Training is Not Necessary for Generalization
arXiv:2109.14119, 29 September 2021
Jonas Geiping
Micah Goldblum
Phillip E. Pope
Michael Moeller
Tom Goldstein
Papers citing "Stochastic Training is Not Necessary for Generalization" (5 / 5 papers shown)
Gradient Descent as a Shrinkage Operator for Spectral Bias
Simon Lucey. 25 Apr 2025.

Neural Redshift: Random Networks are not Random Functions
Damien Teney, A. Nicolicioiu, Valentin Hartmann, Ehsan Abbasnejad. 04 Mar 2024.

The large learning rate phase of deep learning: the catapult mechanism
Aitor Lewkowycz, Yasaman Bahri, Ethan Dyer, Jascha Narain Sohl-Dickstein, Guy Gur-Ari. 04 Mar 2020.

Bag of Tricks for Image Classification with Convolutional Neural Networks
Tong He, Zhi-Li Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. 04 Dec 2018.

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang. 15 Sep 2016.