Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost

Abstract

We study distributed contextual linear bandits with stochastic contexts, where $N$ agents act cooperatively to solve a linear bandit-optimization problem with $d$-dimensional features over the course of $T$ rounds. For this problem, we derive the first ever information-theoretic lower bound $\Omega(dN)$ on the communication cost of any algorithm that performs optimally in a regret minimization setup. We then propose a distributed batch elimination version of the LinUCB algorithm, DisBE-LUCB, where the agents share information among each other through a central server. We prove that the communication cost of DisBE-LUCB matches our lower bound up to logarithmic factors. In particular, for scenarios with known context distribution, the communication cost of DisBE-LUCB is only $\tilde{\mathcal{O}}(dN)$ and its regret is $\tilde{\mathcal{O}}(\sqrt{dNT})$, which is of the same order as that incurred by an optimal single-agent algorithm for $NT$ rounds. We also provide similar bounds for practical settings where the context distribution can only be estimated. Therefore, our proposed algorithm is nearly minimax optimal in terms of \emph{both regret and communication cost}. Finally, we propose DecBE-LUCB, a fully decentralized version of DisBE-LUCB, which operates without a central server, where agents share information with their \emph{immediate neighbors} through a carefully designed consensus procedure.
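To make the batched, server-mediated communication pattern concrete, the following is a minimal illustrative sketch, not the paper's DisBE-LUCB itself: it assumes each agent compresses a batch of observations into ridge-regression sufficient statistics of size $\mathcal{O}(d^2)$ and sends only those to the server, which aggregates them into one shared estimate. All parameter values, function names, and the noise model below are hypothetical.

```python
import numpy as np

# Hypothetical sizes for illustration only: dimension d, N agents, batch size B.
d, N, B, lam = 5, 4, 20, 1.0
rng = np.random.default_rng(0)
theta_star = rng.normal(size=d)          # unknown parameter (simulation only)

def agent_batch_stats(rng, d, B, theta_star):
    """One agent's batch: observe B contexts and noisy linear rewards,
    then compress the batch into (Gram matrix, response vector)."""
    X = rng.normal(size=(B, d))
    r = X @ theta_star + 0.1 * rng.normal(size=B)
    return X.T @ X, X.T @ r

# Server-side aggregation: sum the per-agent statistics once per batch
# and broadcast a single shared ridge-regression estimate back.
A = lam * np.eye(d)
b = np.zeros(d)
for _ in range(N):
    A_i, b_i = agent_batch_stats(rng, d, B, theta_star)
    A += A_i
    b += b_i
theta_hat = np.linalg.solve(A, b)
print("estimation error:", np.linalg.norm(theta_hat - theta_star))
```

The point of the sketch is only that communicating batch summaries rather than raw per-round data keeps the number of transmitted quantities independent of $T$; the actual algorithm's elimination steps and its $\tilde{\mathcal{O}}(dN)$ accounting are given in the paper.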
