PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

31 May 2019
Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi
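
For context, PowerSGD compresses each layer's gradient matrix into a rank-r product of two small factors, computed with a single power-iteration step that is warm-started from the previous factor, and keeps an error-feedback buffer so that discarded components are re-injected later. The snippet below is a minimal single-worker sketch of that idea; the function names, shapes, and use of NumPy are illustrative assumptions, not the authors' reference implementation, and in the distributed algorithm the factors p and q would additionally be averaged across workers with an all-reduce.

```python
import numpy as np

def powersgd_compress(grad, q):
    """One power-iteration step of rank-r compression (illustrative sketch).

    grad: 2-D gradient matrix of shape (n, m); q: warm-start factor (m, r).
    Returns factors (p, q_new) such that p @ q_new.T approximates grad.
    """
    p = grad @ q                 # project the gradient onto the current rank-r subspace
    p, _ = np.linalg.qr(p)       # orthogonalize the columns of p
    q_new = grad.T @ p           # second projection to refine the right factor
    return p, q_new

def powersgd_decompress(p, q):
    """Reconstruct the low-rank approximation of the gradient."""
    return p @ q.T

# Error-feedback loop on a single worker (hypothetical shapes and names).
rng = np.random.default_rng(0)
n, m, rank = 64, 32, 2
q = rng.standard_normal((m, rank))   # warm start, reused across iterations
memory = np.zeros((n, m))            # error-feedback buffer

for step in range(3):
    grad = rng.standard_normal((n, m))      # stand-in for a layer's gradient
    g = grad + memory                       # add back previously discarded error
    p, q = powersgd_compress(g, q)          # distributed training would all-reduce p and q here
    memory = g - powersgd_decompress(p, q)  # keep the compression error for the next step
```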

Papers citing "PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization"

46 / 46 papers shown

On Unbiased Low-Rank Approximation with Minimum Distortion
Leighton Barnes, Stephen Cameron, Benjamin Howard
12 May 2025

Beyond Low-rank Decomposition: A Shortcut Approach for Efficient On-Device Learning
Le-Trung Nguyen, Ael Quélennec, Van-Tam Nguyen, Enzo Tartaglione
08 May 2025

FedFetch: Faster Federated Learning with Adaptive Downstream Prefetching
Qifan Yan, Andrew Liu, Shiqi He, Mathias Lécuyer, Ivan Beschastnikh
FedML
21 Apr 2025

Striving for Simplicity: Simple Yet Effective Prior-Aware Pseudo-Labeling for Semi-Supervised Ultrasound Image Segmentation
Yaxiong Chen, Yujie Wang, Zixuan Zheng, Jingliang Hu, Yilei Shi, Shengwu Xiong, Xiao Xiang Zhu, Lichao Mou
18 Mar 2025

Accelerated Distributed Optimization with Compression and Error Feedback
Yuan Gao, Anton Rodomanov, Jeremy Rack, Sebastian U. Stich
11 Mar 2025

LiteChain: A Lightweight Blockchain for Verifiable and Scalable Federated Learning in Massive Edge Networks
Handi Chen, Rui Zhou, Yun-Hin Chan, Zhihan Jiang, Xianhao Chen, Edith C. H. Ngai
06 Mar 2025

Trustworthiness of Stochastic Gradient Descent in Distributed Learning
Hongyang Li, Caesar Wu, Mohammed Chadli, Said Mammar, Pascal Bouvry
28 Oct 2024

LDAdam: Adaptive Optimization from Low-Dimensional Gradient Statistics
Thomas Robert, M. Safaryan, Ionut-Vlad Modoranu, Dan Alistarh
ODL
21 Oct 2024

Ordered Momentum for Asynchronous SGD
Chang-Wei Shi, Yi-Rui Yang, Wu-Jun Li
ODL
27 Jul 2024

Save It All: Enabling Full Parameter Tuning for Federated Large Language Models via Cycle Block Gradient Descent
Lin Wang, Zhichao Wang, Xiaoying Tang
17 Jun 2024

Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization
Zhe Li, Bicheng Ying, Zidong Liu, Haibo Yang
FedML
24 May 2024

Investigation of Energy-efficient AI Model Architectures and Compression Techniques for "Green" Fetal Brain Segmentation
Szymon Mazurek, M. Pytlarz, Sylwia Malec, A. Crimi
03 Apr 2024

RS-DGC: Exploring Neighborhood Statistics for Dynamic Gradient Compression on Remote Sensing Image Interpretation
Weiying Xie, Zixuan Wang, Jitao Ma, Daixun Li, Yunsong Li
29 Dec 2023

Kimad: Adaptive Gradient Compression with Bandwidth Awareness
Jihao Xin, Ivan Ilin, Shunkang Zhang, Marco Canini, Peter Richtárik
13 Dec 2023

Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates
Ahmad Rammal, Kaja Gruntkowska, Nikita Fedin, Eduard A. Gorbunov, Peter Richtárik
15 Oct 2023

Accelerating Distributed ML Training via Selective Synchronization
S. Tyagi, Martin Swany
FedML
16 Jul 2023

DropCompute: simple and more robust distributed synchronous training via compute variance reduction
Niv Giladi, Shahar Gottlieb, Moran Shkolnik, A. Karnieli, Ron Banner, Elad Hoffer, Kfir Y. Levy, Daniel Soudry
18 Jun 2023

GraVAC: Adaptive Compression for Communication-Efficient Distributed DL Training
S. Tyagi, Martin Swany
20 May 2023

MetaMorphosis: Task-oriented Privacy Cognizant Feature Generation for Multi-task Learning
Md. Adnan Arefeen, Zhouyu Li, M. Y. S. Uddin, Anupam Das
13 May 2023

Green Federated Learning
Ashkan Yousefpour, Sheng Guo, Ashish Shenoy, Sayan Ghosh, Pierre Stock, Kiwan Maeng, Schalk-Willem Kruger, Michael G. Rabbat, Carole-Jean Wu, Ilya Mironov
FedML, AI4CE
26 Mar 2023

SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov
MoE
27 Jan 2023

Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression
Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, H. Kim, Youngsok Kim, Jinho Lee
GNN
24 Jan 2023

Does compressing activations help model parallel training?
S. Bian, Dacheng Li, Hongyi Wang, Eric P. Xing, Shivaram Venkataraman
06 Jan 2023

Scaling Private Deep Learning with Low-Rank and Sparse Gradients
Ryuichi Ito, Seng Pei Liew, Tsubasa Takahashi, Yuya Sasaki, Makoto Onizuka
06 Jul 2022

A Survey on Gradient Inversion: Attacks, Defenses and Future Directions
Rui Zhang, Song Guo, Junxiao Wang, Xin Xie, Dacheng Tao
15 Jun 2022

Federated Random Reshuffling with Compression and Variance Reduction
Grigory Malinovsky, Peter Richtárik
FedML
08 May 2022

projUNN: efficient method for training deep networks with unitary matrices
B. Kiani, Randall Balestriero, Yann LeCun, S. Lloyd
10 Mar 2022

Survey on Large Scale Neural Network Training
Julia Gusak, Daria Cherniuk, Alena Shilova, A. Katrutsa, Daniel Bershatsky, ..., Lionel Eyraud-Dubois, Oleg Shlyazhko, Denis Dimitrov, Ivan V. Oseledets, Olivier Beaumont
21 Feb 2022

On the Convergence of Shallow Neural Network Training with Randomly Masked Neurons
Fangshuo Liao, Anastasios Kyrillidis
05 Dec 2021

Improving Differentially Private SGD via Randomly Sparsified Gradients
Junyi Zhu, Matthew B. Blaschko
01 Dec 2021

EF21 with Bells & Whistles: Practical Algorithmic Extensions of Modern Error Feedback
Ilyas Fatkhullin, Igor Sokolov, Eduard A. Gorbunov, Zhize Li, Peter Richtárik
07 Oct 2021

Solon: Communication-efficient Byzantine-resilient Distributed Training via Redundant Gradients
Lingjiao Chen, Leshang Chen, Hongyi Wang, S. Davidson, Edgar Dobriban
FedML
04 Oct 2021

ErrorCompensatedX: error compensation for variance reduced algorithms
Hanlin Tang, Yao Li, Ji Liu, Ming Yan
04 Aug 2021

Rethinking gradient sparsification as total error minimization
Atal Narayan Sahu, Aritra Dutta, A. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis
02 Aug 2021

A Field Guide to Federated Optimization
Jianyu Wang, Zachary B. Charles, Zheng Xu, Gauri Joshi, H. B. McMahan, ..., Mi Zhang, Tong Zhang, Chunxiang Zheng, Chen Zhu, Wennan Zhu
FedML
14 Jul 2021

FedNL: Making Newton-Type Methods Applicable to Federated Learning
M. Safaryan, Rustem Islamov, Xun Qian, Peter Richtárik
FedML
05 Jun 2021

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever
VLM
24 Feb 2021

Improving Neural Network Training in Low Dimensional Random Bases
Frithjof Gressmann, Zach Eaton-Rosen, Carlo Luschi
09 Nov 2020

Local SGD: Unified Theory and New Efficient Methods
Eduard A. Gorbunov, Filip Hanzely, Peter Richtárik
FedML
03 Nov 2020

Communication Efficient Distributed Learning with Censored, Quantized, and Generalized Group ADMM
Chaouki Ben Issaid, Anis Elgabli, Jihong Park, M. Bennis, Mérouane Debbah
FedML
14 Sep 2020

On Communication Compression for Distributed Optimization on Heterogeneous Data
Sebastian U. Stich
04 Sep 2020

The OARF Benchmark Suite: Characterization and Implications for Federated Learning Systems
Sixu Hu, Yuan N. Li, Xu Liu, Q. Li, Zhaomin Wu, Bingsheng He
FedML
14 Jun 2020

Detached Error Feedback for Distributed SGD with Random Sparsification
An Xu, Heng-Chiao Huang
11 Apr 2020

Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor
M. Safaryan, Egor Shulgin, Peter Richtárik
20 Feb 2020

Natural Compression for Distributed Deep Learning
Samuel Horváth, Chen-Yu Ho, L. Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik
27 May 2019

Faster Neural Network Training with Approximate Tensor Operations
Menachem Adelman, Kfir Y. Levy, Ido Hakimi, M. Silberstein
21 May 2018