Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate

15 March 2018
Shen-Yi Zhao, Gong-Duo Zhang, Ming-Wei Li, Wu-Jun Li

Papers citing "Proximal SCOPE for Distributed Sparse Learning: Better Data Partition Implies Faster Convergence Rate"

4 papers shown:

  • On the Optimal Batch Size for Byzantine-Robust Distributed Learning. Yi-Rui Yang, Chang-Wei Shi, Wu-Jun Li. Topics: FedML, AAML. 23 May 2023.
  • FedREP: A Byzantine-Robust, Communication-Efficient and Privacy-Preserving Framework for Federated Learning. Yi-Rui Yang, Kun Wang, Wulu Li. Topics: FedML. 09 Mar 2023.
  • Buffered Asynchronous SGD for Byzantine Learning. Yi-Rui Yang, Wu-Jun Li. Topics: FedML. 02 Mar 2020.
  • A Proximal Stochastic Gradient Method with Progressive Variance Reduction. Lin Xiao, Tong Zhang. Topics: ODL. 19 Mar 2014.