QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Abstract

Federated learning aims to conduct inference when data are decentralised and locally stored on several clients, under two main constraints: data ownership and communication overhead. In this paper, we address these issues under the Bayesian paradigm. To this end, we propose a novel Markov chain Monte Carlo algorithm coined QLSD, built upon quantised versions of stochastic gradient Langevin dynamics. To improve performance in the big data regime, we introduce variance-reduced alternatives of our methodology, referred to as QLSD* and QLSD++. We provide both non-asymptotic and asymptotic convergence guarantees for the proposed algorithms and illustrate their benefits on several federated learning benchmarks.
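To make the idea concrete, here is a minimal sketch of a QLSD-style step: each client sends an unbiasedly quantised stochastic gradient, and the server averages them and applies a Langevin update. The QSGD-style stochastic quantiser, the step size, and the toy Gaussian posterior are illustrative assumptions, not the paper's exact construction (which covers a general family of unbiased compression operators and variance-reduction schemes).

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, s=16):
    """Unbiased stochastic quantiser (QSGD-style, hypothetical choice):
    each coordinate is randomly rounded to one of s levels of ||v||,
    so that E[Q(v)] = v."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * s
    lower = np.floor(scaled)
    levels = lower + (rng.random(v.shape) < (scaled - lower))
    return np.sign(v) * levels * norm / s

def qlsd_step(theta, client_grads, gamma=1e-2):
    """One QLSD-style update: average the clients' quantised gradients
    of the negative log-posterior, then take a Langevin step."""
    g = np.mean([stochastic_quantize(grad(theta)) for grad in client_grads], axis=0)
    noise = rng.normal(size=theta.shape)
    return theta - gamma * g + np.sqrt(2.0 * gamma) * noise

# Toy target: Gaussian posterior for a 2-d mean, data split across two clients.
data = [rng.normal([1.0, -1.0], 1.0, size=(50, 2)) for _ in range(2)]
# Each client's gradient of its piece of the negative log-likelihood.
grads = [lambda th, x=x: len(x) * th - x.sum(axis=0) for x in data]

theta = np.zeros(2)
samples = []
for _ in range(2000):
    theta = qlsd_step(theta, grads)
    samples.append(theta.copy())

posterior_mean = np.mean(samples[500:], axis=0)  # should approach (1, -1)
```

Because the quantiser is unbiased, the compressed gradients inject extra variance but no bias; the variance-reduced variants mentioned in the abstract are designed to control exactly this extra noise.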
