On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization
Hao Yu, R. L. Jin
arXiv:1905.04346, 10 May 2019
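The listing itself gives no technical detail, but the title names a concrete scheme: parallel SGD in which the mini-batch size grows across communication rounds, trading extra local computation for fewer synchronizations. Below is a minimal single-process sketch of that idea, assuming a toy quadratic objective, a geometric growth factor, and illustrative function names; it is not the paper's exact algorithm.

```python
# Simulation of parallel SGD with a dynamically growing batch size.
# The objective, growth factor, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(w, batch_size):
    """Noisy gradient of f(w) = 0.5 * ||w||^2; averaging a batch of
    batch_size samples shrinks the noise std by 1/sqrt(batch_size)."""
    noise = rng.normal(0.0, 1.0 / np.sqrt(batch_size), size=w.shape)
    return w + noise

def parallel_sgd_dynamic_batch(w, num_workers=4, rounds=20,
                               base_batch=8, growth=2.0, lr=0.1):
    total_grads = 0
    for t in range(rounds):
        batch = int(base_batch * growth ** t)  # batch size grows each round
        # Each worker evaluates a gradient on its own batch; one averaging
        # (all-reduce) step per round is the only communication.
        grads = [stochastic_grad(w, batch) for _ in range(num_workers)]
        w = w - lr * np.mean(grads, axis=0)
        total_grads += num_workers * batch
    return w, total_grads

w_final, n = parallel_sgd_dynamic_batch(rng.normal(size=5))
print(f"|w| after 20 rounds: {np.linalg.norm(w_final):.4f} "
      f"using {n} stochastic gradients")
```

With a growth factor above 1, a budget of T stochastic gradient evaluations is spent in only O(log T) synchronization rounds, which is the computation-versus-communication trade-off the title refers to.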
Papers citing "On the Computation and Communication Complexity of Parallel SGD with Dynamic Batch Sizes for Stochastic Non-Convex Optimization" (9 of 9 shown)
FedBIAD: Communication-Efficient and Accuracy-Guaranteed Federated Learning with Bayesian Inference-Based Adaptive Dropout
Jingjing Xue, Min Liu, Sheng Sun, Yuwei Wang, Hui Jiang, Xue Jiang
14 Jul 2023
Taming Resource Heterogeneity In Distributed ML Training With Dynamic Batching
S. Tyagi, Prateek Sharma
20 May 2023
What Do We Mean by Generalization in Federated Learning?
Honglin Yuan, Warren Morningstar, Lin Ning, K. Singhal
27 Oct 2021 (OOD, FedML)
Federated Composite Optimization
Honglin Yuan, Manzil Zaheer, Sashank J. Reddi
17 Nov 2020 (FedML)
Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes
Marten van Dijk, Nhuong V. Nguyen, Toan N. Nguyen, Lam M. Nguyen, Quoc Tran-Dinh, Phuong Ha Nguyen
27 Oct 2020 (FedML)
Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability
Yikai Yan, Chaoyue Niu, Yucheng Ding, Zhenzhe Zheng, Fan Wu, Guihai Chen, Shaojie Tang, Zhihua Wu
18 Feb 2020 (FedML)
Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016
A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach
10 Dec 2012
Optimal Distributed Online Prediction using Mini-Batches
O. Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao
07 Dec 2010