On the Convergence of SARAH and Beyond
Bingcong Li, Meng Ma, G. Giannakis
International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
5 June 2019
arXiv: 1906.02351
Papers citing "On the Convergence of SARAH and Beyond" (15 papers)

VASSO: Variance Suppression for Sharpness-Aware Minimization
Bingcong Li, Yilang Zhang, G. Giannakis
02 Sep 2025

Convergence Analysis of the PAGE Stochastic Algorithm for Weakly Convex Finite-Sum Optimization
Laurent Condat, Peter Richtárik
31 Aug 2025

Adjusted Shuffling SARAH: Advancing Complexity Analysis via Dynamic Gradient Weighting
Duc Toan Nguyen, Trang H. Tran, Lam M. Nguyen
14 Jun 2025

Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov
21 Feb 2025

Enhancing Sharpness-Aware Optimization Through Variance Suppression
Neural Information Processing Systems (NeurIPS), 2023
Bingcong Li, G. Giannakis
27 Sep 2023

Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Neural Information Processing Systems (NeurIPS), 2023
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang
15 Apr 2023

Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis
Quoc Tran-Dinh, Marten van Dijk
19 Dec 2022

Faster federated optimization under second-order similarity
International Conference on Learning Representations (ICLR), 2022
Ahmed Khaled, Chi Jin
06 Sep 2022

Random-reshuffled SARAH does not need full gradient computations
Optimization Letters (Optim. Lett.), 2021
Aleksandr Beznosikov, Martin Takáč
26 Nov 2021

Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng
25 May 2021

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
International Conference on Machine Learning (ICML), 2020
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
25 Aug 2020

Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient
International Conference on Neural Information Processing (ICONIP), 2020
Hao Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiao Yan, Ji-Rong Wen
25 Jul 2020

Communication-Efficient Robust Federated Learning Over Heterogeneous Datasets
Yanjie Dong, G. Giannakis, Tianyi Chen, Julian Cheng, Md. Jahangir Hossain, Victor C. M. Leung
17 Jun 2020

Adaptive Step Sizes in Variance Reduction via Regularization
Bingcong Li, G. Giannakis
15 Oct 2019

Almost Tune-Free Variance Reduction
International Conference on Machine Learning (ICML), 2019
Bingcong Li, Lingda Wang, G. Giannakis
25 Aug 2019