Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei Chen, Tie-Yan Liu
29 May 2019 · arXiv: 1905.12648 · FedML
Papers citing "Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data" (5 of 5 papers shown)
Asynchronous Training Schemes in Distributed Learning with Time Delay
Haoxiang Wang, Zhanhong Jiang, Chao Liu, Soumik Sarkar, D. Jiang, Young M. Lee
28 Aug 2022
Communication-efficient Distributed Newton-like Optimization with Gradients and M-estimators
Ziyan Yin
13 Jul 2022
Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik
07 Jun 2022
FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data
Xinwei Zhang, Mingyi Hong, S. Dhople, W. Yin, Yang Liu
22 May 2020 · FedML
A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 Mar 2014 · ODL