Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate

Neural Information Processing Systems (NeurIPS), 2018
arXiv:1803.05621, 15 March 2018
Shen-Yi Zhao, Gong-Duo Zhang, Ming-Wei Li, Wu-Jun Li

Papers citing "Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate" (6 papers)
On the Optimal Batch Size for Byzantine-Robust Distributed Learning
Yi-Rui Yang, Chang-Wei Shi, Wu-Jun Li
23 May 2023
FedREP: A Byzantine-Robust, Communication-Efficient and Privacy-Preserving Framework for Federated Learning
Yi-Rui Yang, Kun Wang, Wulu Li
09 Mar 2023
Federated Coordinate Descent for Privacy-Preserving Multiparty Linear Regression
Xinlin Leng, Chenxu Li, Weifeng Xu, Yuyan Sun, Hongtao Wang
16 Sep 2022
Buffered Asynchronous SGD for Byzantine Learning
Journal of Machine Learning Research (JMLR), 2020
Yi-Rui Yang, Wu-Jun Li
02 Mar 2020
Global Momentum Compression for Sparse Communication in Distributed Learning
Chang-Wei Shi, Shen-Yi Zhao, Yin-Peng Xie, Hao Gao, Wu-Jun Li
30 May 2019
Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
IEEE Transactions on Signal Processing (IEEE Trans. Signal Process.), 2019
Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei-neng Chen, Tie-Yan Liu
29 May 2019