ResearchTrend.AI
Toward Communication Efficient Adaptive Gradient Method
10 September 2021 · arXiv:2109.05109
Xiangyi Chen, Xiaoyun Li, P. Li
FedML

Papers citing "Toward Communication Efficient Adaptive Gradient Method"

9 papers
Efficient Federated Learning via Local Adaptive Amended Optimizer with Linear Speedup
Yan Sun, Li Shen, Hao Sun, Liang Ding, Dacheng Tao
FedML · 30 Jul 2023
Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression
Xiaoyun Li, Ping Li
FedML · 25 Nov 2022
Fast Adaptive Federated Bilevel Optimization
Feihu Huang
FedML · 02 Nov 2022
Communication-Efficient Adaptive Federated Learning
Yujia Wang, Lu Lin, Jinghui Chen
FedML · 05 May 2022
Communication-Efficient TeraByte-Scale Model Training Framework for Online Advertising
Weijie Zhao, Xuewu Jiao, Mingqing Hu, Xiaoyun Li, X. Zhang, Ping Li
3DV · 05 Jan 2022
On the Convergence of Decentralized Adaptive Gradient Methods
Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li
07 Sep 2021
Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko
04 Mar 2021
Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems
Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Rui Ding, Mingming Sun, P. Li
MoE · 12 Mar 2020
FedPAQ: A Communication-Efficient Federated Learning Method with Periodic Averaging and Quantization
Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ali Jadbabaie, Ramtin Pedarsani
FedML · 28 Sep 2019