
| Title | Venue | Year |
|---|---|---|
| Breaking (Global) Barriers in Parallel Stochastic Optimization with Wait-Avoiding Group Averaging | IEEE Transactions on Parallel and Distributed Systems (TPDS) | 2020 |
| Detached Error Feedback for Distributed SGD with Random Sparsification | International Conference on Machine Learning (ICML) | 2020 |
| Dualize, Split, Randomize: Toward Fast Nonsmooth Optimization Algorithms | Journal of Optimization Theory and Applications (JOTA) | 2020 |
| A Unified Theory of Decentralized SGD with Changing Topology and Local Updates | International Conference on Machine Learning (ICML) | 2020 |
| Communication-Efficient Distributed SGD with Error-Feedback, Revisited | International Journal of Computational Intelligence Systems (IJCIS) | 2020 |
| Adaptive Federated Optimization | International Conference on Learning Representations (ICLR) | 2020 |
| Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees | IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS) | 2020 |
| Differentially Quantized Gradient Methods | IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory) | 2020 |
| Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach | IEEE International Conference on Distributed Computing Systems (ICDCS) | 2020 |
| Tighter Theory for Local SGD on Identical and Heterogeneous Data | International Conference on Artificial Intelligence and Statistics (AISTATS) | 2019 |
| RATQ: A Universal Fixed-Length Quantizer for Stochastic Optimization | IEEE Transactions on Information Theory (IEEE Trans. Inf. Theory) | 2019 |