Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data
arXiv:1905.12648 · 29 May 2019
Shicong Cen, Huishuai Zhang, Yuejie Chi, Wei Chen, Tie-Yan Liu
FedML

Papers citing "Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data"

5 / 5 papers shown

Asynchronous Training Schemes in Distributed Learning with Time Delay
Haoxiang Wang, Zhanhong Jiang, Chao Liu, Soumik Sarkar, D. Jiang, Young M. Lee
28 Aug 2022

Communication-efficient Distributed Newton-like Optimization with Gradients and M-estimators
Ziyan Yin
13 Jul 2022

Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik
07 Jun 2022

FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data
Xinwei Zhang, Mingyi Hong, S. Dhople, W. Yin, Yang Liu
FedML
22 May 2020

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
ODL
19 Mar 2014