Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication

25 May 2018
Zebang Shen, Aryan Mokhtari, Tengfei Zhou, P. Zhao, Hui Qian

Papers citing "Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication"

12 / 12 papers shown
Decentralized Sum-of-Nonconvex Optimization
Zhuanghua Liu, K. H. Low
04 Feb 2024

Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression
Xiaoyun Li, Ping Li
FedML
25 Nov 2022

Explicit Second-Order Min-Max Optimization Methods with Optimal Convergence Guarantee
Tianyi Lin, P. Mertikopoulos, Michael I. Jordan
23 Oct 2022

1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed
Hanlin Tang, Shaoduo Gan, A. A. Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He
AI4CE
04 Feb 2021

PMGT-VR: A decentralized proximal-gradient algorithmic framework with variance reduction
Haishan Ye, Wei Xiong, Tong Zhang
30 Dec 2020

On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization
Abolfazl Hashemi, Anish Acharya, Rudrajit Das, H. Vikalo, Sujay Sanghavi, Inderjit Dhillon
20 Nov 2020

Gradient tracking and variance reduction for decentralized optimization and machine learning
Ran Xin, S. Kar, U. Khan
13 Feb 2020

A Decentralized Proximal Point-type Method for Saddle Point Problems
Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, S. Pattathil, Zebang Shen, Nenggan Zheng
31 Oct 2019

Central Server Free Federated Learning over Single-sided Trust Social Networks
Chaoyang He, Conghui Tan, Hanlin Tang, Shuang Qiu, Ji Liu
FedML
11 Oct 2019

Variance-Reduced Decentralized Stochastic Optimization with Gradient Tracking -- Part II: GT-SVRG
Ran Xin, U. Khan, S. Kar
08 Oct 2019

MATCHA: Speeding Up Decentralized SGD via Matching Decomposition Sampling
Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, S. Kar
23 May 2019

Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums
Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié
FedML
28 Jan 2019