ResearchTrend.AI

arXiv: 2102.06752

A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization

International Conference on Machine Learning (ICML), 2021
12 February 2021
Ran Xin, U. Khan, S. Kar
arXiv (abs) · PDF · HTML

Papers citing "A Hybrid Variance-Reduced Method for Decentralized Stochastic Non-Convex Optimization"

21 citing papers shown
A Hybrid Stochastic Gradient Tracking Method for Distributed Online Optimization Over Time-Varying Directed Networks
Xinli Shi, Xingxing Yuan, Longkang Zhu, G. Wen
28 Aug 2025

Enhancing Privacy in Decentralized Min-Max Optimization: A Differentially Private Approach
Yueyang Quan, Chang Wang, Shengjie Zhai, Minghong Fang, Zhuqing Liu
10 Aug 2025

Faster Adaptive Decentralized Learning Algorithms
International Conference on Machine Learning (ICML), 2024
Feihu Huang, Jianyu Zhao
19 Aug 2024

The Effectiveness of Local Updates for Decentralized Learning under Data Heterogeneity
IEEE Transactions on Signal Processing (IEEE TSP), 2024
Tongle Wu, Ying Sun
23 Mar 2024

Decentralized Gradient-Free Methods for Stochastic Non-Smooth Non-Convex Optimization
AAAI Conference on Artificial Intelligence (AAAI), 2023
Zhenwei Lin, Jingfan Xia, Qi Deng, Luo Luo
18 Oct 2023

Serverless Federated AUPRC Optimization for Multi-Party Collaborative Imbalanced Data Mining
Knowledge Discovery and Data Mining (KDD), 2023
Xidong Wu, Zhengmian Hu, Jian Pei, Heng Huang
06 Aug 2023

Decentralized Local Updates with Dual-Slow Estimation and Momentum-based Variance-Reduction for Non-Convex Optimization
European Conference on Artificial Intelligence (ECAI), 2023
Kangyang Luo, Kunkun Zhang, Sheng Zhang, Xiang Li, Ming Gao
17 Jul 2023

Variance-reduced accelerated methods for decentralized stochastic double-regularized nonconvex strongly-concave minimax problems
Gabriel Mancino-Ball, Yangyang Xu
14 Jul 2023

Distributed Random Reshuffling Methods with Improved Convergence
IEEE Transactions on Automatic Control (TAC), 2023
Kun-Yen Huang, Linli Zhou, Shi Pu
21 Jun 2023

Near-Optimal Decentralized Momentum Method for Nonconvex-PL Minimax Problems
Feihu Huang, Songcan Chen
21 Apr 2023

A Unified Momentum-based Paradigm of Decentralized SGD for Non-Convex Models and Heterogeneous Data
Haizhou Du, Chengdong Ni
01 Mar 2023

GradMA: A Gradient-Memory-based Accelerated Federated Learning with Alleviated Catastrophic Forgetting
Computer Vision and Pattern Recognition (CVPR), 2023
Kangyang Luo, Xiang Li, Yunshi Lan, Ming Gao
28 Feb 2023

DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization
IEEE Conference on Computer Communications (INFOCOM), 2022
Pei-Yuan Qiu, Yining Li, Zhuqing Liu, Prashant Khanduri, Jia Liu, Ness B. Shroff, Elizabeth S. Bentley, K. Turck
05 Dec 2022

On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network
International Conference on Artificial Intelligence and Statistics (AISTATS), 2022
Hongchang Gao, Bin Gu, My T. Thai
30 Jun 2022

Distributed saddle point problems for strongly concave-convex functions
IEEE Transactions on Signal and Information Processing over Networks (TSIPN), 2022
Muhammad I. Qureshi, U. Khan
11 Feb 2022

DoCoM: Compressed Decentralized Optimization with Near-Optimal Sample Complexity
Chung-Yiu Yau, Hoi-To Wai
01 Feb 2022

Variance-Reduced Stochastic Quasi-Newton Methods for Decentralized Learning: Part I
IEEE Transactions on Signal Processing (IEEE Trans. Signal Process.), 2022
Jiaojiao Zhang, Huikang Liu, Anthony Man-Cho So, Qing Ling
19 Jan 2022

MDPGT: Momentum-based Decentralized Policy Gradient Tracking
AAAI Conference on Artificial Intelligence (AAAI), 2021
Zhanhong Jiang, Xian Yeow Lee, Sin Yong Tan, Kai Liang Tan, Aditya Balu, Young M. Lee, Chinmay Hegde, Soumik Sarkar
06 Dec 2021

A Unified and Refined Convergence Analysis for Non-Convex Decentralized Learning
Sulaiman A. Alghunaim, Kun Yuan
19 Oct 2021

Distributed stochastic gradient tracking algorithm with variance reduction for non-convex optimization
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021
Xia Jiang, Xianlin Zeng, Jian Sun, Jie Chen
28 Jun 2021

Optimal Complexity in Decentralized Training
International Conference on Machine Learning (ICML), 2020
Yucheng Lu, Christopher De Sa
15 Jun 2020