SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points
19 April 2019
Zhize Li
arXiv: 1904.09265
Papers citing "SSRGD: Simple Stochastic Recursive Gradient Descent for Escaping Saddle Points" (28 papers)
Hessian-guided Perturbed Wasserstein Gradient Flows for Escaping Saddle Points. Naoya Yamamoto, Juno Kim, Taiji Suzuki. 21 Sep 2025.
Second-Order Convergence in Private Stochastic Non-Convex Optimization. Youming Tao, Zuyuan Zhang, Dongxiao Yu, Xiuzhen Cheng, Falko Dressler, Di Wang. 21 May 2025.
Stochastic First-Order Methods with Non-smooth and Non-Euclidean Proximal Terms for Nonconvex High-Dimensional Stochastic Optimization. Yue Xie, Jiawen Bi, Hongcheng Liu. 27 Jun 2024.
Probabilistic Guarantees of Stochastic Recursive Gradient in Non-Convex Finite Sum Problems. Yanjie Zhong, Jiaqi Li, Soumendra Lahiri. 29 Jan 2024.
Escaping Saddle Points in Heterogeneous Federated Learning via Distributed SGD with Communication Compression. Sijin Chen, Zhize Li, Yuejie Chi. 29 Oct 2023. [FedML]
Faster Gradient-Free Algorithms for Nonsmooth Nonconvex Stochastic Optimization. Le‐Yu Chen, Jing Xu, Luo Luo. 16 Jan 2023.
Versatile Single-Loop Method for Gradient Estimator: First and Second Order Optimality, and its Application to Federated Learning. Kazusato Oko, Shunta Akiyama, Tomoya Murata, Taiji Suzuki. 01 Sep 2022. [FedML]
Simple and Optimal Stochastic Gradient Methods for Nonsmooth Nonconvex Optimization. Zhize Li, Jian Li. 22 Aug 2022.
Anchor Sampling for Federated Learning with Partial Client Participation. Feijie Wu, Song Guo, Zhihao Qu, Shiqi He, Ziming Liu, Jing Gao. 13 Jun 2022. [FedML]
Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization. Yuri Kinoshita, Taiji Suzuki. 30 Mar 2022.
Tackling benign nonconvexity with smoothing and stochastic gradients. Harsh Vardhan, Sebastian U. Stich. 18 Feb 2022.
Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning. Tomoya Murata, Taiji Suzuki. 12 Feb 2022. [FedML]
Faster Rates for Compressed Federated Learning with Client-Variance Reduction. Haoyu Zhao, Konstantin Burlachenko, Zhize Li, Peter Richtárik. 24 Dec 2021. [FedML]
Escape saddle points by a simple gradient-descent based algorithm. Chenyi Zhang, Tongyang Li. 28 Nov 2021. [ODL]
Faster Perturbed Stochastic Gradient Methods for Finding Local Minima. Zixiang Chen, Dongruo Zhou, Quanquan Gu. 25 Oct 2021.
DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization. Boyue Li, Zhize Li, Yuejie Chi. 04 Oct 2021.
FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning. Haoyu Zhao, Zhize Li, Peter Richtárik. 10 Aug 2021. [FedML]
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression. Zhize Li, Peter Richtárik. 20 Jul 2021.
ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method. Zhize Li. 21 Mar 2021.
ZeroSARAH: Efficient Nonconvex Finite-Sum Optimization with Zero Full Gradient Computation. Zhize Li, Slavomír Hanzely, Peter Richtárik. 02 Mar 2021.
Stochastic Gradient Langevin Dynamics with Variance Reduction. Zhishen Huang, Stephen Becker. 12 Feb 2021.
Bias-Variance Reduced Local SGD for Less Heterogeneous Federated Learning. Tomoya Murata, Taiji Suzuki. 05 Feb 2021. [FedML]
Escape saddle points faster on manifolds via perturbed Riemannian stochastic recursive gradient. Andi Han, Junbin Gao. 23 Oct 2020.
PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization. Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik. 25 Aug 2020. [ODL]
A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization. Zhize Li, Peter Richtárik. 12 Jun 2020. [FedML]
A Hybrid Stochastic Optimization Framework for Stochastic Composite Nonconvex Optimization. Quoc Tran-Dinh, Nhan H. Pham, T. Dzung, Lam M. Nguyen. 08 Jul 2019.
A Fast Anderson-Chebyshev Acceleration for Nonlinear Optimization. Zhize Li, Jian Li. 07 Sep 2018.
Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference. Zhize Li, Tianyi Zhang, Shuyu Cheng, Jun Yu Li, Jian Li. 29 Mar 2018. [BDL]