A Fast Anderson-Chebyshev Acceleration for Nonlinear Optimization
Zhize Li, Jian Li
arXiv:1809.02341 (v4, latest). Submitted 7 September 2018.
Papers citing "A Fast Anderson-Chebyshev Acceleration for Nonlinear Optimization" (7 of 7 papers shown):

1. Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration
   Qinglun Li, Miao Zhang, Yingqi Liu, Quanjun Yin, Li Shen, Xiaochun Cao (FedML), 09 Oct 2024

2. FedPAGE: A Fast Local Stochastic Gradient Method for Communication-Efficient Federated Learning
   Haoyu Zhao, Zhize Li, Peter Richtárik (FedML), 10 Aug 2021

3. CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression
   Zhize Li, Peter Richtárik, 20 Jul 2021

4. ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
   Zhize Li, 21 Mar 2021

5. Acceleration via Fractal Learning Rate Schedules
   Naman Agarwal, Surbhi Goel, Cyril Zhang, 01 Mar 2021

6. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
   Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik (ODL), 25 Aug 2020

7. A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization
   Zhize Li, Peter Richtárik (FedML), 12 Jun 2020