An Accelerated Distributed Stochastic Gradient Method with Momentum
arXiv:2402.09714
15 February 2024
Kun-Yen Huang, Shi Pu, Angelia Nedić
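The title names a stochastic gradient method with momentum. As a point of reference only, here is a minimal heavy-ball momentum SGD sketch — a generic textbook update, not the paper's distributed algorithm; the step size `lr` and momentum coefficient `beta` are illustrative values, not taken from the paper.

```python
import numpy as np

def momentum_sgd_step(w, grad, velocity, lr=0.01, beta=0.9):
    """One heavy-ball momentum update (illustrative hyperparameters).

    velocity accumulates a geometrically decaying sum of past gradients,
    which damps oscillations and can accelerate convergence over plain SGD.
    """
    velocity = beta * velocity + grad
    w = w - lr * velocity
    return w, velocity

# Usage: minimize f(w) = ||w||^2 / 2, whose gradient at w is simply w.
w = np.array([5.0, -3.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = momentum_sgd_step(w, w, v)
# After 200 steps the iterate is close to the minimizer at the origin.
```

In the distributed setting studied by this line of work, each node would run a local update of this form and additionally average (or gossip) its iterate with neighbors; that consensus step is omitted here.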

Papers citing "An Accelerated Distributed Stochastic Gradient Method with Momentum" (6 papers shown):
A Bias-Correction Decentralized Stochastic Gradient Algorithm with Momentum Acceleration
Yuchen Hu, Xi Chen, Weidong Liu, Xiaojun Mao
31 Jan 2025
OledFL: Unleashing the Potential of Decentralized Federated Learning via Opposite Lookahead Enhancement
Qinglun Li, Miao Zhang, Mengzhu Wang, Quanjun Yin, Li Shen
OODD, FedML
09 Oct 2024
On the Complexity of Decentralized Smooth Nonconvex Finite-Sum Optimization
Luo Luo, Yunyan Bai, Lesi Chen, Yuxing Liu, Haishan Ye
25 Oct 2022
DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin
MoE
24 Apr 2021
Swarming for Faster Convergence in Stochastic Optimization
Shi Pu, Alfredo García
11 Jun 2018
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016