Sparsified SGD with Memory
Sebastian U. Stich, Jean-Baptiste Cordonnier, Martin Jaggi
arXiv:1809.07599, 20 September 2018
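For context on the cited paper: its title refers to running SGD with sparsified updates (e.g. keeping only the top-k coordinates) while storing the discarded coordinates in a local memory that is added back at later steps (error feedback). The sketch below illustrates that general idea in Python; the function names, step size, single-worker setup, and example problem are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

def top_k(v, k):
    """Keep only the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def sparsified_sgd_with_memory(grad_fn, x0, lr=0.1, k=10, steps=1000):
    """Illustrative single-worker sketch of SGD with top-k sparsification
    and an error-feedback memory buffer (not the paper's exact method)."""
    x = x0.copy()
    memory = np.zeros_like(x0)        # mass dropped by earlier sparsifications
    for _ in range(steps):
        g = grad_fn(x)                # stochastic gradient at x
        update = memory + lr * g      # add back previously dropped coordinates
        applied = top_k(update, k)    # only k coordinates are applied/communicated
        memory = update - applied     # remember what was dropped this step
        x = x - applied
    return x

# Example usage: least-squares objective with noisy gradients (illustrative only).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 50)), rng.standard_normal(200)
grad = lambda x: A.T @ (A @ x - b) / len(b) + 0.01 * rng.standard_normal(50)
x_hat = sparsified_sgd_with_memory(grad, np.zeros(50), lr=0.05, k=5, steps=2000)
```

In a distributed setting, each worker would keep its own memory buffer and only the k-sparse vectors would be communicated, which is where the communication savings would come from.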
Papers citing "Sparsified SGD with Memory" (50 of 141 shown)
Towards Efficient Communications in Federated Learning: A Contemporary Survey
Zihao Zhao, Yuzhu Mao, Yang Liu, Linqi Song, Ouyang Ye, Xinlei Chen, Wenbo Ding. 02 Aug 2022. [FedML]

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, L. Horesh, Min-Fong Hong, Sijia Liu. 13 Jun 2022. [OOD]

Neurotoxin: Durable Backdoors in Federated Learning
Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael W. Mahoney, Joseph E. Gonzalez, Kannan Ramchandran, Prateek Mittal. 12 Jun 2022. [FedML]

Gradient Obfuscation Gives a False Sense of Security in Federated Learning
Kai Yue, Richeng Jin, Chau-Wai Wong, D. Baron, H. Dai. 08 Jun 2022. [FedML]

Communication-Efficient Distributionally Robust Decentralized Learning
Matteo Zecchin, Marios Kountouris, David Gesbert. 31 May 2022.

Efficient-Adam: Communication-Efficient Distributed Adam
Congliang Chen, Li Shen, Wei Liu, Z. Luo. 28 May 2022.

Communication-Efficient Adaptive Federated Learning
Yujia Wang, Lu Lin, Jinghui Chen. 05 May 2022. [FedML]

FedCau: A Proactive Stop Policy for Communication and Computation Efficient Federated Learning
Afsaneh Mahmoudi, H. S. Ghadikolaei, José Hélio da Cruz Júnior, Carlo Fischione. 16 Apr 2022.

Convert, compress, correct: Three steps toward communication-efficient DNN training
Zhongzhu Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini. 17 Mar 2022.

Linear Stochastic Bandits over a Bit-Constrained Channel
A. Mitra, Hamed Hassani, George J. Pappas. 02 Mar 2022.

DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training
Joya Chen, Kai Xu, Yuhui Wang, Yifei Cheng, Angela Yao. 28 Feb 2022.

Survey on Large Scale Neural Network Training
Julia Gusak, Daria Cherniuk, Alena Shilova, A. Katrutsa, Daniel Bershatsky, ..., Lionel Eyraud-Dubois, Oleg Shlyazhko, Denis Dimitrov, Ivan V. Oseledets, Olivier Beaumont. 21 Feb 2022.

Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou. 15 Feb 2022.

Distributed Learning With Sparsified Gradient Differences
Yicheng Chen, Rick S. Blum, Martin Takáč, Brian M. Sadler. 05 Feb 2022.

BEER: Fast O(1/T) Rate for Decentralized Nonconvex Optimization with Communication Compression
Haoyu Zhao, Boyue Li, Zhize Li, Peter Richtárik, Yuejie Chi. 31 Jan 2022.

Variance-Reduced Heterogeneous Federated Learning via Stratified Client Selection
Guangyuan Shen, D. Gao, Libin Yang, Fang Zhou, Duanxiao Song, Wei Lou, Shirui Pan. 15 Jan 2022. [FedML]

Sparsified Secure Aggregation for Privacy-Preserving Federated Learning
Irem Ergun, Hasin Us Sami, Başak Güler. 23 Dec 2021. [FedML]

Optimal Rate Adaption in Federated Learning with Compressed Communications
Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Jiangchuan Liu. 13 Dec 2021. [FedML]

Communication-Efficient Distributed Learning via Sparse and Adaptive Stochastic Gradient
Xiaoge Deng, Dongsheng Li, Tao Sun, Xicheng Lu. 08 Dec 2021. [FedML]

Distributed Adaptive Learning Under Communication Constraints
Marco Carpentiero, Vincenzo Matta, A. H. Sayed. 03 Dec 2021.

Wyner-Ziv Gradient Compression for Federated Learning
Kai Liang, Huiru Zhong, Haoning Chen, Youlong Wu. 16 Nov 2021. [FedML]

Large-Scale Deep Learning Optimizations: A Comprehensive Survey
Xiaoxin He, Fuzhao Xue, Xiaozhe Ren, Yang You. 01 Nov 2021.

TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks
Atul Sharma, Wei Chen, Joshua C. Zhao, Qiang Qiu, Somali Chaterji, S. Bagchi. 19 Oct 2021. [FedML, AAML]

ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
Hui-Po Wang, Sebastian U. Stich, Yang He, Mario Fritz. 11 Oct 2021. [FedML, AI4CE]

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik. 07 Oct 2021.

Comfetch: Federated Learning of Large Networks on Constrained Clients via Sketching
Tahseen Rabbani, Brandon Yushan Feng, Marco Bornstein, Kyle Rui Sang, Yifan Yang, Arjun Rajkumar, A. Varshney, Furong Huang. 17 Sep 2021. [FedML]

On the Convergence of Decentralized Adaptive Gradient Methods
Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li. 07 Sep 2021.

EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Learning
S. Vargaftik, Ran Ben-Basat, Amit Portnoy, Gal Mendelson, Y. Ben-Itzhak, Michael Mitzenmacher. 19 Aug 2021. [FedML]

Decentralized Composite Optimization with Compression
Yao Li, Xiaorui Liu, Jiliang Tang, Ming Yan, Kun Yuan. 10 Aug 2021.

ErrorCompensatedX: error compensation for variance reduced algorithms
Hanlin Tang, Yao Li, Ji Liu, Ming Yan. 04 Aug 2021.

Rethinking gradient sparsification as total error minimization
Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis. 02 Aug 2021.

A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu. 14 Jul 2021. [FedML]

BAGUA: Scaling up Distributed Learning with System Relaxations
Shaoduo Gan, Xiangru Lian, Rui Wang, Jianbin Chang, Chengjun Liu, ..., Jiawei Jiang, Binhang Yuan, Sen Yang, Ji Liu, Ce Zhang. 03 Jul 2021.

FedNL: Making Newton-Type Methods Applicable to Federated Learning
M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik. 05 Jun 2021. [FedML]

Slashing Communication Traffic in Federated Learning by Transmitting Clustered Model Updates
Laizhong Cui, Xiaoxin Su, Yipeng Zhou, Yi Pan. 10 May 2021. [FedML]

From Distributed Machine Learning to Federated Learning: A Survey
Ji Liu, Jizhou Huang, Yang Zhou, Xuhong Li, Shilei Ji, Haoyi Xiong, Dejing Dou. 29 Apr 2021. [FedML, OOD]

ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, ..., Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, K. Gopalakrishnan. 21 Apr 2021.

Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Mingzhe Chen, Deniz Gündüz, Kaibin Huang, Walid Saad, M. Bennis, Aneta Vulgarakis Feljan, H. Vincent Poor. 05 Apr 2021.

Federated Learning: A Signal Processing Perspective
Tomer Gafni, Nir Shlezinger, Kobi Cohen, Yonina C. Eldar, H. Vincent Poor. 31 Mar 2021. [FedML]

Learned Gradient Compression for Distributed Deep Learning
L. Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis. 16 Mar 2021.

Efficient Randomized Subspace Embeddings for Distributed Optimization under a Communication Budget
R. Saha, Mert Pilanci, Andrea J. Goldsmith. 13 Mar 2021.

Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
Max Ryabinin, Eduard A. Gorbunov, Vsevolod Plokhotnyuk, Gennady Pekhimenko. 04 Mar 2021.

On the Utility of Gradient Compression in Distributed Training Systems
Saurabh Agarwal, Hongyi Wang, Shivaram Venkataraman, Dimitris Papailiopoulos. 28 Feb 2021.

Experiments with Rich Regime Training for Deep Learning
Xinyan Li, A. Banerjee. 26 Feb 2021.

Distributed Second Order Methods with Fast Rates and Compressed Communication
Rustem Islamov, Xun Qian, Peter Richtárik. 14 Feb 2021.

Communication-efficient Distributed Cooperative Learning with Compressed Beliefs
Taha Toghani, César A. Uribe. 14 Feb 2021.

Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients
A. Mitra, Rayana H. Jaafar, George J. Pappas, Hamed Hassani. 14 Feb 2021. [FedML]

1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He. 04 Feb 2021. [AI4CE]

Federated Learning over Wireless Device-to-Device Networks: Algorithms and Convergence Analysis
Hong Xing, Osvaldo Simeone, Suzhi Bi. 29 Jan 2021.

Faster Non-Convex Federated Learning via Global and Local Momentum
Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu. 07 Dec 2020. [FedML]