Linear Convergence of Adaptive Stochastic Gradient Descent
Yuege Xie, Xiaoxia Wu, Rachel A. Ward
arXiv:1908.10525 · 28 August 2019

Papers citing "Linear Convergence of Adaptive Stochastic Gradient Descent" (11 papers):

Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad
Sayantan Choudhury, N. Tupitsa, Nicolas Loizou, Samuel Horváth, Martin Takáč, Eduard A. Gorbunov (05 Mar 2024)

Relaxing the Additivity Constraints in Decentralized No-Regret High-Dimensional Bayesian Optimization
Anthony Bardou, Patrick Thiran, Thomas Begin (31 May 2023)

TiAda: A Time-scale Adaptive Algorithm for Nonconvex Minimax Optimization
Xiang Li, Junchi Yang, Niao He (31 Oct 2022)

Accelerating SGD for Highly Ill-Conditioned Huge-Scale Online Matrix Completion
G. Zhang, Hong-Ming Chiu, Richard Y. Zhang (24 Aug 2022)

Improved Policy Optimization for Online Imitation Learning
J. Lavington, Sharan Vaswani, Mark Schmidt (29 Jul 2022)

Nest Your Adaptive Algorithm for Parameter-Agnostic Nonconvex Minimax Optimization
Junchi Yang, Xiang Li, Niao He (01 Jun 2022)

Optimal Algorithms for Stochastic Multi-Level Compositional Optimization
Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, Tianbao Yang (15 Feb 2022)

Stochastic gradient descent with noise of machine learning type. Part I: Discrete time analysis
Stephan Wojtowytsch (04 May 2021)

Convergence of stochastic gradient descent schemes for Łojasiewicz-landscapes
Steffen Dereich, Sebastian Kassing (16 Feb 2021)

Sequential convergence of AdaGrad algorithm for smooth convex optimization
Cheik Traoré, Edouard Pauwels (24 Nov 2020)

A Qualitative Study of the Dynamic Behavior for Adaptive Gradient Algorithms
Chao Ma, Lei Wu, Weinan E (14 Sep 2020)