Double Quantization for Communication-Efficient Distributed Optimization
Yue Yu, Jiaxiang Wu, Longbo Huang
arXiv:1805.10111, 25 May 2018
Communities: MQ

Papers citing "Double Quantization for Communication-Efficient Distributed Optimization"

10 of 10 papers shown:

Improved Convergence Analysis and SNR Control Strategies for Federated Learning in the Presence of Noise
Antesh Upadhyay, Abolfazl Hashemi
14 Jul 2023

Distributed Optimization Methods for Multi-Robot Systems: Part II -- A Survey
O. Shorinwa, Trevor Halsted, Javier Yu, Mac Schwager
26 Jan 2023

Optimus-CC: Efficient Large NLP Model Training with 3D Parallelism Aware Communication Compression
Jaeyong Song, Jinkyu Yim, Jaewon Jung, Hongsun Jang, H. Kim, Youngsok Kim, Jinho Lee
24 Jan 2023
Communities: GNN

Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
Chen Dun, Mirian Hipolito Garcia, C. Jermaine, Dimitrios Dimitriadis, Anastasios Kyrillidis
28 Oct 2022

QC-ODKLA: Quantized and Communication-Censored Online Decentralized Kernel Learning via Linearized ADMM
Ping Xu, Yue Wang, Xiang Chen, Zhi Tian
04 Aug 2022

Distributed Adversarial Training to Robustify Deep Neural Networks at Scale
Gaoyuan Zhang, Songtao Lu, Yihua Zhang, Xiangyi Chen, Pin-Yu Chen, Quanfu Fan, Lee Martie, L. Horesh, Min-Fong Hong, Sijia Liu
13 Jun 2022
Communities: OOD

ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
Hui-Po Wang, Sebastian U. Stich, Yang He, Mario Fritz
11 Oct 2021
Communities: FedML, AI4CE

ErrorCompensatedX: Error Compensation for Variance Reduced Algorithms
Hanlin Tang, Yao Li, Ji Liu, Ming Yan
04 Aug 2021

ResIST: Layer-Wise Decomposition of ResNets for Distributed Training
Chen Dun, Cameron R. Wolfe, C. Jermaine, Anastasios Kyrillidis
02 Jul 2021

Communication Optimization Strategies for Distributed Deep Neural Network Training: A Survey
Shuo Ouyang, Dezun Dong, Yemao Xu, Liquan Xiao
06 Mar 2020