arXiv: 1802.04434
signSGD: Compressed Optimisation for Non-Convex Problems
13 February 2018
Jeremy Bernstein
Yu-Xiang Wang
Kamyar Azizzadenesheli
Anima Anandkumar
FedML
ODL
Papers citing "signSGD: Compressed Optimisation for Non-Convex Problems"
8 / 158 papers shown
99% of Distributed Optimization is a Waste of Time: The Issue and How to Fix it
Konstantin Mishchenko
Filip Hanzely
Peter Richtárik
27 Jan 2019
A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks
S. Shi
Qiang-qiang Wang
Kaiyong Zhao
Zhenheng Tang
Yuxin Wang
Xiang Huang
Xiaowen Chu
14 Jan 2019
A Sufficient Condition for Convergences of Adam and RMSProp
Fangyu Zou
Li Shen
Zequn Jie
Weizhong Zhang
Wei Liu
23 Nov 2018
Policy Gradient in Partially Observable Environments: Approximation and Convergence
Kamyar Azizzadenesheli
Manish Kumar Bera
Anima Anandkumar
OffRL
18 Oct 2018
signSGD with Majority Vote is Communication Efficient And Fault Tolerant
Jeremy Bernstein
Jiawei Zhao
Kamyar Azizzadenesheli
Anima Anandkumar
FedML
11 Oct 2018
Don't Use Large Mini-Batches, Use Local SGD
Tao R. Lin
Sebastian U. Stich
Kumar Kshitij Patel
Martin Jaggi
22 Aug 2018
Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication
Felix Sattler
Simon Wiedemann
K. Müller
Wojciech Samek
MQ
22 May 2018
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi
J. Nutini
Mark W. Schmidt
16 Aug 2016