Staleness-aware Async-SGD for Distributed Deep Learning
arXiv:1511.05950, 18 November 2015
Wei Zhang, Suyog Gupta, Xiangru Lian, Ji Liu
Papers citing "Staleness-aware Async-SGD for Distributed Deep Learning" (46 papers)
1. No Need to Talk: Asynchronous Mixture of Language Models. Anastasiia Filippova, Angelos Katharopoulos, David Grangier, Ronan Collobert. 04 Oct 2024. [MoE]
2. Ordered Momentum for Asynchronous SGD. Chang-Wei Shi, Yi-Rui Yang, Wu-Jun Li. 27 Jul 2024. [ODL]
3. Distributed Stochastic Gradient Descent with Staleness: A Stochastic Delay Differential Equation Based Framework. Siyuan Yu, Wei Chen, H. V. Poor. 17 Jun 2024.
4. Dynamic Client Clustering, Bandwidth Allocation, and Workload Optimization for Semi-synchronous Federated Learning. Liang Yu, Xiang Sun, Rana Albelaihi, Chaeeun Park, Sihua Shao. 11 Mar 2024. [FedML]
5. AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices. Ji Liu, Tianshi Che, Yang Zhou, Ruoming Jin, H. Dai, Dejing Dou, P. Valduriez. 18 Dec 2023.
6. Revolutionizing Wireless Networks with Federated Learning: A Comprehensive Review. Sajjad Emdadi Mahdimahalleh. 01 Aug 2023. [AI4CE]
7. Robust Fully-Asynchronous Methods for Distributed Training over General Architecture. Zehan Zhu, Ye Tian, Yan Huang, Jinming Xu, Shibo He. 21 Jul 2023. [OOD]
8. FedML Parrot: A Scalable Federated Learning System via Heterogeneity-aware Scheduling on Sequential and Hierarchical Training. Zhenheng Tang, Xiaowen Chu, Ryan Yide Ran, Sunwoo Lee, Shaoshuai Shi, Yonggang Zhang, Yuxin Wang, Alex Liang, A. Avestimehr, Chaoyang He. 03 Mar 2023. [FedML]
9. HiFlash: Communication-Efficient Hierarchical Federated Learning with Adaptive Staleness Control and Heterogeneity-aware Client-Edge Association. Qiong Wu, Xu Chen, Ouyang Tao, Zhi Zhou, Xiaoxi Zhang, Shusen Yang, Junshan Zhang. 16 Jan 2023.
10. Latency Aware Semi-synchronous Client Selection and Model Aggregation for Wireless Federated Learning. Liang Yu, Xiang Sun, Rana Albelaihi, Chen Yi. 19 Oct 2022. [FedML]
11. Approximate Computing and the Efficient Machine Learning Expedition. J. Henkel, Hai Helen Li, A. Raghunathan, M. Tahoori, Swagath Venkataramani, Xiaoxuan Yang, Georgios Zervakis. 02 Oct 2022.
12. DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware. H. Hashemi, Yongqin Wang, M. Annavaram. 30 Jun 2022. [FedML]
13. RevBiFPN: The Fully Reversible Bidirectional Feature Pyramid Network. Vitaliy Chiley, Vithursan Thangarasa, Abhay Gupta, Anshul Samar, Joel Hestness, D. DeCoste. 28 Jun 2022.
14. FuncPipe: A Pipelined Serverless Framework for Fast and Cost-efficient Training of Deep Learning Models. Yunzhuo Liu, Bo Jiang, Tian Guo, Zimeng Huang, Wen-ping Ma, Xinbing Wang, Chenghu Zhou. 28 Apr 2022.
15. FederatedScope: A Flexible Federated Learning Platform for Heterogeneity. Yuexiang Xie, Zhen Wang, Dawei Gao, Daoyuan Chen, Liuyi Yao, Weirui Kuang, Yaliang Li, Bolin Ding, Jingren Zhou. 11 Apr 2022. [FedML]
16. DNN Training Acceleration via Exploring GPGPU Friendly Sparsity. Zhuoran Song, Yihong Xu, Han Li, Naifeng Jing, Xiaoyao Liang, Li Jiang. 11 Mar 2022.
17. Addressing modern and practical challenges in machine learning: A survey of online federated and transfer learning. Shuang Dai, Fanlin Meng. 07 Feb 2022. [FedML, OnRL]
18. Collaborative Learning over Wireless Networks: An Introductory Overview. Emre Ozfatura, Deniz Gunduz, H. Vincent Poor. 07 Dec 2021.
19. Resource-Efficient Federated Learning. A. Abdelmoniem, Atal Narayan Sahu, Marco Canini, Suhaib A. Fahmy. 01 Nov 2021. [FedML]
20. HyperJump: Accelerating HyperBand via Risk Modelling. Pedro Mendes, Maria Casimiro, Paolo Romano, David Garlan. 05 Aug 2021.
21. Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning. Chung-Hsuan Hu, Zheng Chen, Erik G. Larsson. 23 Jul 2021. [FedML]
22. Parareal Neural Networks Emulating a Parallel-in-time Algorithm. Zhanyu Ma, Jiyang Xie, Jingyi Yu. 16 Mar 2021. [AI4CE]
23. Consistent Lock-free Parallel Stochastic Gradient Descent for Fast and Stable Convergence. Karl Bäckström, Ivan Walulya, Marina Papatriantafilou, P. Tsigas. 17 Feb 2021.
24. Anytime Minibatch with Delayed Gradients. H. Al-Lawati, S. Draper. 15 Dec 2020.
25. PSO-PS: Parameter Synchronization with Particle Swarm Optimization for Distributed Training of Deep Neural Networks. Qing Ye, Y. Han, Yanan Sun, Jiancheng Lv. 06 Sep 2020.
26. DBS: Dynamic Batch Size For Distributed Deep Neural Network Training. Qing Ye, Yuhao Zhou, Mingjia Shi, Yanan Sun, Jiancheng Lv. 23 Jul 2020.
27. FLeet: Online Federated Learning via Staleness Awareness and Performance Prediction. Georgios Damaskinos, R. Guerraoui, Anne-Marie Kermarrec, Vlad Nitu, Rhicheek Patra, Francois Taiani. 12 Jun 2020.
28. MixML: A Unified Analysis of Weakly Consistent Parallel Learning. Yucheng Lu, J. Nash, Christopher De Sa. 14 May 2020. [FedML]
29. A Review of Privacy-preserving Federated Learning for the Internet-of-Things. Christopher Briggs, Zhong Fan, Péter András. 24 Apr 2020.
30. Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts. Max Ryabinin, Anton I. Gusev. 10 Feb 2020. [FedML]
31. SAFA: a Semi-Asynchronous Protocol for Fast Federated Learning with Low Overhead. A. Masullo, Ligang He, Toby Perrett, Rui Mao, Carsten Maple, Majid Mirmehdi. 03 Oct 2019.
32. Taming Momentum in a Distributed Asynchronous Environment. Ido Hakimi, Saar Barkai, Moshe Gabel, Assaf Schuster. 26 Jul 2019.
33. Database Meets Deep Learning: Challenges and Opportunities. Wei Wang, Meihui Zhang, Gang Chen, H. V. Jagadish, Beng Chin Ooi, K. Tan. 21 Jun 2019.
34. Dynamic Mini-batch SGD for Elastic Distributed Training: Learning in the Limbo of Resources. Yanghua Peng, Hang Zhang, Yifei Ma, Tong He, Zhi-Li Zhang, Sheng Zha, Mu Li. 26 Apr 2019.
35. Speeding up Deep Learning with Transient Servers. Shijian Li, R. Walls, Lijie Xu, Tian Guo. 28 Feb 2019.
36. Incentive-based integration of useful work into blockchains. David Amar, Lior Zilpa. 10 Jan 2019.
37. MD-GAN: Multi-Discriminator Generative Adversarial Networks for Distributed Datasets. Corentin Hardy, Erwan Le Merrer, B. Sericola. 09 Nov 2018. [GAN]
38. Adaptive Task Allocation for Mobile Edge Learning. Jin Zhu, Wei Zheng. 09 Nov 2018.
39. A Hitchhiker's Guide On Distributed Training of Deep Neural Networks. K. Chahal, Manraj Singh Grover, Kuntal Dey. 28 Oct 2018. [3DH, OOD]
40. Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks. Soojeong Kim, Gyeong-In Yu, Hojin Park, Sungwoo Cho, Eunji Jeong, Hyeonmin Ha, Sanha Lee, Joo Seong Jeong, Byung-Gon Chun. 08 Aug 2018.
41. Slow and Stale Gradients Can Win the Race: Error-Runtime Trade-offs in Distributed SGD. Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, P. Nagpurkar. 03 Mar 2018.
42. Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis. Tal Ben-Nun, Torsten Hoefler. 26 Feb 2018. [GNN]
43. SparCE: Sparsity aware General Purpose Core Extensions to Accelerate Deep Neural Networks. Sanchari Sen, Shubham Jain, Swagath Venkataramani, A. Raghunathan. 07 Nov 2017.
44. Efficient Training of Convolutional Neural Nets on Large Distributed Systems. Sameer Kumar, D. Sreedhar, Vaibhav Saxena, Yogish Sabharwal, Ashish Verma. 02 Nov 2017.
45. Collaborative Deep Learning in Fixed Topology Networks. Zhanhong Jiang, Aditya Balu, Chinmay Hegde, Soumik Sarkar. 23 Jun 2017. [FedML]
46. TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning. W. Wen, Cong Xu, Feng Yan, Chunpeng Wu, Yandan Wang, Yiran Chen, Hai Helen Li. 22 May 2017.