
Faster Federated Learning with Decaying Number of Local SGD Steps

Abstract

In Federated Learning (FL), client devices connected over the internet collaboratively train a machine learning model without sharing their private data with a central server or with other clients. The seminal Federated Averaging (FedAvg) algorithm trains a single global model by performing rounds of local training on clients followed by model averaging. FedAvg can improve the communication-efficiency of training by performing more steps of Stochastic Gradient Descent (SGD) on clients in each round. However, client data in real-world FL is highly heterogeneous, which has been extensively shown to slow model convergence and harm final performance when K > 1 steps of SGD are performed on clients per round. In this work we propose decaying K as training progresses, which can jointly improve the final performance of the FL model whilst reducing the wall-clock time and the total computational cost of training compared to using a fixed K. We analyse the convergence of FedAvg with decaying K for strongly-convex objectives, providing novel insights into the convergence properties, and derive three theoretically-motivated decay schedules for K. We then perform thorough experiments on four benchmark FL datasets (FEMNIST, CIFAR100, Sentiment140, Shakespeare) to show the benefit of our approaches in terms of real-world convergence time, computational cost, and generalisation performance.
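To make the idea concrete, below is a minimal sketch of FedAvg in which the number of local SGD steps K is decayed across communication rounds. The toy quadratic client objectives, the exponential `k_schedule`, and helper names such as `local_sgd` are illustrative assumptions for this sketch only; they are not the paper's derived decay schedules or experimental setup.

```python
# Minimal sketch: FedAvg with a decaying number of local SGD steps (K).
# The decay schedule and toy client objectives below are assumptions for
# illustration, not the schedules derived in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous clients: client c holds a least-squares objective
# 0.5 * ||A_c w - b_c||^2 with its own (shifted) data distribution.
num_clients, dim = 10, 5
clients = []
for c in range(num_clients):
    A = rng.normal(loc=c * 0.1, scale=1.0, size=(20, dim))
    b = A @ rng.normal(size=dim) + 0.1 * rng.normal(size=20)
    clients.append((A, b))

def local_sgd(w, A, b, K, lr):
    """Run K steps of SGD on one client's local objective."""
    for _ in range(K):
        i = rng.integers(len(b))
        grad = A[i] * (A[i] @ w - b[i])
        w = w - lr * grad
    return w

def k_schedule(round_idx, K0=32, decay=0.9, K_min=1):
    """Illustrative exponential decay of local steps per round
    (an assumption, not one of the paper's three schedules)."""
    return max(K_min, int(K0 * decay ** round_idx))

w_global = np.zeros(dim)
lr, num_rounds = 0.01, 50
for r in range(num_rounds):
    K = k_schedule(r)
    # Each client starts from the current global model, runs K local
    # SGD steps, and the server averages the returned models (FedAvg).
    local_models = [local_sgd(w_global.copy(), A, b, K, lr) for A, b in clients]
    w_global = np.mean(local_models, axis=0)
    loss = np.mean([0.5 * np.mean((A @ w_global - b) ** 2) for A, b in clients])
    print(f"round {r:2d}  K={K:2d}  avg loss={loss:.4f}")
```

In this sketch, early rounds use many local steps to make rapid progress cheaply, while later rounds use fewer steps so that client heterogeneity causes less drift away from the global optimum.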
