ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD
arXiv:1810.08313 · 19 October 2018
Jianyu Wang, Gauri Joshi

Papers citing "Adaptive Communication Strategies to Achieve the Best Error-Runtime Trade-off in Local-Update SGD"

36 papers shown
  • Communication-Efficient Federated Fine-Tuning of Language Models via Dynamic Update Schedules. Michail Theologitis, V. Samoladas, Antonios Deligiannakis. 07 May 2025.
  • Communication Optimization for Decentralized Learning atop Bandwidth-limited Edge Networks. Tingyang Sun, Tuan Nguyen, Ting He. 16 Apr 2025.
  • EDiT: A Local-SGD-Based Efficient Distributed Training Method for Large Language Models. Jialiang Cheng, Ning Gao, Yun Yue, Zhiling Ye, Jiadi Jiang, Jian Sha. 10 Dec 2024.
  • Sketched Adaptive Federated Deep Learning: A Sharp Convergence Analysis. Zhijie Chen, Qiaobo Li, A. Banerjee. 11 Nov 2024.
  • No Need to Talk: Asynchronous Mixture of Language Models. Anastasiia Filippova, Angelos Katharopoulos, David Grangier, Ronan Collobert. 04 Oct 2024.
  • Distributed Extra-gradient with Optimal Complexity and Communication Guarantees. Ali Ramezani-Kebrya, Kimon Antonakopoulos, Igor Krawczuk, Justin Deschenaux, V. Cevher. 17 Aug 2023.
  • Faster Federated Learning with Decaying Number of Local SGD Steps. Jed Mills, Jia Hu, Geyong Min. 16 May 2023.
  • Federated Learning with Flexible Control. Shiqiang Wang, Jake B. Perazzone, Mingyue Ji, Kevin S. Chan. 16 Dec 2022.
  • Communication-Efficient Federated Learning for Heterogeneous Edge Devices Based on Adaptive Gradient Quantization. Heting Liu, Fang He, Guohong Cao. 16 Dec 2022.
  • Federated Hypergradient Descent. A. K. Kan. 03 Nov 2022.
  • GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity. A. Maranjyan, M. Safaryan, Peter Richtárik. 28 Oct 2022.
  • STSyn: Speeding Up Local SGD with Straggler-Tolerant Synchronization. Feng Zhu, Jingjing Zhang, Xin Eric Wang. 06 Oct 2022.
  • Efficient Adaptive Federated Optimization of Federated Learning for IoT. Zunming Chen, Hongyan Cui, Ensen Wu, Yu Xi. 23 Jun 2022.
  • Evaluation and Analysis of Different Aggregation and Hyperparameter Selection Methods for Federated Brain Tumor Segmentation. Ece Isik Polat, Gorkem Polat, Altan Koçyiğit, A. Temi̇zel. 16 Feb 2022.
  • On the Convergence of Heterogeneous Federated Learning with Arbitrary Adaptive Online Model Pruning. Hanhan Zhou, Tian-Shing Lan, Guru Venkataramani, Wenbo Ding. 27 Jan 2022.
  • Resource-Efficient Federated Learning. A. Abdelmoniem, Atal Narayan Sahu, Marco Canini, Suhaib A. Fahmy. 01 Nov 2021.
  • Cost-Effective Federated Learning in Mobile Edge Networks. Bing Luo, Xiang Li, Shiqiang Wang, Jianwei Huang, Leandros Tassiulas. 12 Sep 2021.
  • FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning. Charlie Hou, K. K. Thekumparampil, Giulia Fanti, Sewoong Oh. 16 Aug 2021.
  • A Decentralized Federated Learning Framework via Committee Mechanism with Convergence Guarantee. Chunjiang Che, Xiaoli Li, Chuan Chen, Xiaoyu He, Zibin Zheng. 01 Aug 2021.
  • A Field Guide to Federated Optimization. Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu. 14 Jul 2021.
  • BAGUA: Scaling up Distributed Learning with System Relaxations. Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang. 03 Jul 2021.
  • Towards Demystifying Serverless Machine Learning Training. Jiawei Jiang, Shaoduo Gan, Yue Liu, Fanlin Wang, Gustavo Alonso, Ana Klimovic, Ankit Singla, Wentao Wu, Ce Zhang. 17 May 2021.
  • Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients. A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani. 14 Feb 2021.
  • Throughput-Optimal Topology Design for Cross-Silo Federated Learning. Othmane Marfoq, Chuan Xu, Giovanni Neglia, Richard Vidal. 23 Oct 2020.
  • Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. Jianyu Wang, Qinghua Liu, Hao Liang, Gauri Joshi, H. Vincent Poor. 15 Jul 2020.
  • Robust Federated Learning: The Case of Affine Distribution Shifts. Amirhossein Reisizadeh, Farzan Farnia, Ramtin Pedarsani, Ali Jadbabaie. 16 Jun 2020.
  • Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks. Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang. 05 May 2020.
  • Communication-Efficient Edge AI: Algorithms and Systems. Yuanming Shi, Kai Yang, Tao Jiang, Jun Zhang, Khaled B. Letaief. 22 Feb 2020.
  • Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach. Pengchao Han, Shiqiang Wang, K. Leung. 14 Jan 2020.
  • Communication-Efficient Local Decentralized SGD Methods. Xiang Li, Wenhao Yang, Shusen Wang, Zhihua Zhang. 21 Oct 2019.
  • Model Pruning Enables Efficient Federated Learning on Edge Devices. Yuang Jiang, Shiqiang Wang, Victor Valls, Bongjun Ko, Wei-Han Lee, Kin K. Leung, Leandros Tassiulas. 26 Sep 2019.
  • Gradient Descent with Compressed Iterates. Ahmed Khaled, Peter Richtárik. 10 Sep 2019.
  • MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling. Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar. 23 May 2019.
  • Don't Use Large Mini-Batches, Use Local SGD. Tao R. Lin, Sebastian U. Stich, Kumar Kshitij Patel, Martin Jaggi. 22 Aug 2018.
  • Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD. Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar. 03 Mar 2018.
  • Optimal Distributed Online Prediction using Mini-Batches. O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao. 07 Dec 2010.