
FedAVOT: Exact Distribution Alignment in Federated Learning via Masked Optimal Transport

Main text: 4 pages, 1 figure; bibliography: 1 page
Abstract

Federated Learning (FL) allows distributed model training without sharing raw data, but suffers when client participation is partial. In practice, the distribution of available users (the \emph{availability distribution} $q$) rarely aligns with the distribution defining the optimization objective (the \emph{importance distribution} $p$), leading to biased and unstable updates under classical FedAvg. We propose \textbf{Federated AVerage with Optimal Transport (FedAVOT)}, which formulates aggregation as a masked optimal transport problem aligning $q$ and $p$. Using Sinkhorn scaling, \textbf{FedAVOT} computes transport-based aggregation weights with provable convergence guarantees. \textbf{FedAVOT} achieves a standard $\mathcal{O}(1/\sqrt{T})$ rate in the nonsmooth convex FL setting, independent of the number of participating users per round. Our experiments show substantial improvements over FedAvg across heterogeneous, fairness-sensitive, and low-availability regimes, even when only two clients participate per round.
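To make the masked-transport idea concrete, below is a minimal NumPy sketch of Sinkhorn scaling on a masked kernel, under assumed details not taken from the paper: a toy 0/1 ground cost, a mask that forbids transport out of unavailable clients, and illustrative names (masked_sinkhorn, avail, q, p). It produces a plan whose marginals align with $q$ and $p$ on the masked support; how FedAVOT maps such a plan to its final aggregation weights follows the paper's formulation, not this sketch.

    import numpy as np

    def masked_sinkhorn(q, p, mask, cost, eps=0.1, iters=500):
        # Sinkhorn scaling on a masked Gibbs kernel: the returned plan
        # has row marginal ~q and column marginal ~p where the mask permits.
        K = mask * np.exp(-cost / eps)
        u, v = np.ones_like(q), np.ones_like(p)
        for _ in range(iters):
            u = q / np.maximum(K @ v, 1e-16)    # scale rows toward q
            v = p / np.maximum(K.T @ u, 1e-16)  # scale columns toward p
        return u[:, None] * K * v[None, :]

    # Toy round: 5 clients, only clients 0 and 3 participate.
    avail = np.array([1., 0., 0., 1., 0.])
    q = avail / avail.sum()                       # availability distribution
    p = np.array([0.10, 0.30, 0.20, 0.25, 0.15])  # importance distribution
    mask = np.outer(avail, np.ones(5))            # no mass leaves absent clients
    cost = 1.0 - np.eye(5)                        # hypothetical 0/1 ground cost

    plan = masked_sinkhorn(q, p, mask, cost)
    print(plan.sum(axis=1))  # ~q: mass sent by each participating client
    print(plan.sum(axis=0))  # ~p: importance mass received by each client

Because the mask zeroes every row of an absent client, all of $p$'s importance mass is routed through the two participants, which is exactly the alignment the abstract describes.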
