ResearchTrend.AI

arXiv: 2006.07013
A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization

12 June 2020
Zhize Li
Peter Richtárik
    FedML

Papers citing "A Unified Analysis of Stochastic Gradient Methods for Nonconvex Federated Optimization"

10 / 10 papers shown

1. Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering
   Lingxiao Huang, Zhize Li, Jialin Sun, Haoyu Zhao
   FedML
   26 Oct 2022

2. A simplified convergence theory for Byzantine resilient stochastic gradient descent
   Lindon Roberts, E. Smyth
   25 Aug 2022

3. Stochastic Gradient Methods with Preconditioned Updates
   Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, R. Tappenden, Martin Takáč
   ODL
   01 Jun 2022

4. BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression
   Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi
   31 Jan 2022

5. ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
   Zhize Li
   21 Mar 2021

6. Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
   Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
   04 Mar 2021

7. MARINA: Faster Non-Convex Distributed Learning with Compression
   Eduard A. Gorbunov, Konstantin Burlachenko, Zhize Li, Peter Richtárik
   15 Feb 2021

8. PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
   Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
   ODL
   25 Aug 2020

9. Acceleration for Compressed Gradient Descent in Distributed and Federated Optimization
   Zhize Li, D. Kovalev, Xun Qian, Peter Richtárik
   FedML, AI4CE
   26 Feb 2020

10. Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
    Hamed Karimi, J. Nutini, Mark W. Schmidt
    16 Aug 2016