On the Convergence of SARAH and Beyond
arXiv: 1906.02351 (v2, latest)

International Conference on Artificial Intelligence and Statistics (AISTATS), 2019
5 June 2019
Bingcong Li, Meng Ma, G. Giannakis
Links: arXiv abs · PDF · HTML

Papers citing "On the Convergence of SARAH and Beyond" (15 of 15 papers shown)
VASSO: Variance Suppression for Sharpness-Aware Minimization
Bingcong Li, Yilang Zhang, G. Giannakis · 02 Sep 2025
Convergence Analysis of the PAGE Stochastic Algorithm for Weakly Convex Finite-Sum Optimization
Laurent Condat, Peter Richtárik · 31 Aug 2025
Adjusted Shuffling SARAH: Advancing Complexity Analysis via Dynamic Gradient Weighting
Duc Toan Nguyen, Trang H. Tran, Lam M. Nguyen · 14 Jun 2025
Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov · 21 Feb 2025
Enhancing Sharpness-Aware Optimization Through Variance Suppression (NeurIPS 2023)
Bingcong Li, G. Giannakis · 27 Sep 2023 · AAML
Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis (NeurIPS 2023)
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang · 15 Apr 2023
Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis
Quoc Tran-Dinh, Marten van Dijk · 19 Dec 2022
Faster federated optimization under second-order similarity (ICLR 2022)
Ahmed Khaled, Chi Jin · 06 Sep 2022 · FedML
Random-reshuffled SARAH does not need a full gradient computations (Optim. Lett. 2021)
Aleksandr Beznosikov, Martin Takáč · 26 Nov 2021
Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums (AISTATS 2021)
Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng · 25 May 2021
PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization (ICML 2020)
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik · 25 Aug 2020 · ODL
Variance Reduction for Deep Q-Learning using Stochastic Recursive Gradient (ICONIP 2020)
Hao Jia, Xiao Zhang, Jun Xu, Wei Zeng, Hao Jiang, Xiao Yan, Ji-Rong Wen · 25 Jul 2020
Communication-Efficient Robust Federated Learning Over Heterogeneous Datasets
Yanjie Dong, G. Giannakis, Tianyi Chen, Julian Cheng, Md. Jahangir Hossain, Victor C. M. Leung · 17 Jun 2020 · FedML
Adaptive Step Sizes in Variance Reduction via Regularization
Bingcong Li, G. Giannakis · 15 Oct 2019
Almost Tune-Free Variance Reduction (ICML 2019)
Bingcong Li, Lingda Wang, G. Giannakis · 25 Aug 2019