On the Convergence of Step Decay Step-Size for Stochastic Optimization
Xiaoyu Wang, Sindri Magnússon, M. Johansson
arXiv:2102.09393, 18 February 2021
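For context, the step decay schedule named in the title holds the step size constant within each stage and multiplies it by a fixed factor when the stage ends. Below is a minimal sketch in Python; the function name step_decay and the parameters eta0, decay, and stage_len are illustrative assumptions, not notation from the paper.

    import random

    def step_decay(t, eta0=1.0, decay=0.5, stage_len=100):
        """Step decay step size at iteration t: eta0 scaled by
        `decay` once per completed stage of `stage_len` iterations."""
        return eta0 * decay ** (t // stage_len)

    # Usage in a plain SGD loop on a toy quadratic f(x) = x^2
    # with additive gradient noise (purely illustrative).
    x = 5.0
    for t in range(1000):
        grad = 2 * x + random.gauss(0.0, 0.1)  # noisy gradient of x**2
        x -= step_decay(t) * grad

    print(x)  # ends near the minimizer 0

The piecewise-constant shape is what distinguishes this schedule from polynomially decaying step sizes such as eta0 / (t + 1).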
Papers citing "On the Convergence of Step Decay Step-Size for Stochastic Optimization" (6 papers):
1. Increasing Both Batch Size and Learning Rate Accelerates Stochastic Gradient Descent. Hikaru Umeda, Hideaki Iiduka. 17 Feb 2025.
2. Provably Scalable Black-Box Variational Inference with Structured Variational Families. Joohwan Ko, Kyurae Kim, W. Kim, Jacob R. Gardner. 19 Jan 2024.
3. Demystifying the Myths and Legends of Nonconvex Convergence of SGD. Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, M. Kandemir, Xin Li. 19 Oct 2023.
4. Two Sides of One Coin: the Limits of Untuned SGD and the Power of Adaptive Methods. Junchi Yang, Xiang Li, Ilyas Fatkhullin, Niao He. 21 May 2023.
5. A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method. Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach. 10 Dec 2012.
6. Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.