Distributed Stochastic Optimization under a General Variance Condition

30 January 2023
Kun Huang, Xiao Li, Shi Pu
FedML

Papers citing "Distributed Stochastic Optimization under a General Variance Condition"

6 / 6 papers shown
An Accelerated Distributed Stochastic Gradient Method with Momentum
Kun Huang, Shi Pu, Angelia Nedić
15 Feb 2024

Momentum Benefits Non-IID Federated Learning Simply and Provably
Ziheng Cheng, Xinmeng Huang, Pengfei Wu, Kun Yuan
FedML
28 Jun 2023

Distributed Random Reshuffling Methods with Improved Convergence
Kun Huang, Linli Zhou, Shi Pu
21 Jun 2023

Global Convergence and Stability of Stochastic Gradient Descent
V. Patel, Shushu Zhang, Bowen Tian
04 Oct 2021

Swarming for Faster Convergence in Stochastic Optimization
Shi Pu, Alfredo García
11 Jun 2018

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016