Federated Combinatorial Multi-Agent Multi-Armed Bandits

Abstract

This paper introduces a federated learning framework tailored to online combinatorial optimization with bandit feedback. In this setting, agents select subsets of arms, observe noisy rewards for these subsets without accessing individual arm information, and can cooperate and share information at specific intervals. Our framework transforms any offline resilient single-agent $(\alpha-\epsilon)$-approximation algorithm with complexity $\tilde{\mathcal{O}}\left(\frac{\psi}{\epsilon^\beta}\right)$, where $\tilde{\mathcal{O}}$ suppresses logarithmic factors, for some function $\psi$ and constant $\beta$, into an online multi-agent algorithm with $m$ communicating agents and an $\alpha$-regret of no more than $\tilde{\mathcal{O}}\left(m^{-\frac{1}{3+\beta}} \psi^{\frac{1}{3+\beta}} T^{\frac{2+\beta}{3+\beta}}\right)$. This approach not only eliminates the $\epsilon$ approximation error but also ensures sublinear growth in the time horizon $T$, and it achieves a linear speedup as the number of communicating agents increases. The algorithm is also communication-efficient, requiring only a sublinear number of communication rounds, quantified as $\tilde{\mathcal{O}}\left(\psi T^{\frac{\beta}{\beta+1}}\right)$. Furthermore, we apply the framework to online stochastic submodular maximization using various offline algorithms, yielding the first results for both single-agent and multi-agent settings while recovering specialized single-agent theoretical guarantees. We empirically validate our approach on a stochastic data summarization problem, illustrating the effectiveness of the proposed framework even in single-agent scenarios.
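As a worked sanity check (our own back-of-the-envelope reading, not a derivation taken from the paper), the stated regret bound is exactly what one obtains by balancing an assumed per-round approximation loss of $\epsilon$ against an estimation cost shared across the $m$ agents:

$$R_\alpha(T) \lesssim \epsilon T + \frac{\psi}{m\,\epsilon^{2+\beta}}, \qquad \epsilon^* = \Theta\!\left(\left(\tfrac{\psi}{mT}\right)^{\frac{1}{3+\beta}}\right) \;\Rightarrow\; R_\alpha(T) = \tilde{\mathcal{O}}\!\left(m^{-\frac{1}{3+\beta}}\, \psi^{\frac{1}{3+\beta}}\, T^{\frac{2+\beta}{3+\beta}}\right).$$

The sketch below illustrates the flavor of such an offline-to-online reduction in its simplest explore-then-commit form. It is a hypothetical toy, not the paper's algorithm: the modular arm model, noise level, exploration schedule T0, and greedy offline_oracle are all our assumptions. The $m$ agents explore base arms in parallel, average their estimates in a single communication round, and then commit to the subset returned by an offline approximation oracle.

    # Hypothetical explore-then-commit sketch of an offline-to-online
    # reduction with m communicating agents; NOT the paper's algorithm.
    import numpy as np

    rng = np.random.default_rng(0)

    n_arms, k, m, T = 20, 5, 4, 10_000   # base arms, subset size, agents, horizon
    true_means = rng.uniform(0.2, 0.9, n_arms)

    def offline_oracle(est_means, k):
        """Stand-in offline oracle (exact greedy top-k for a modular reward;
        a submodular objective would use a greedy (1 - 1/e)-approximation)."""
        return np.argsort(est_means)[-k:]

    # Exploration: each agent pulls every arm T0 times in parallel.
    T0 = int(T ** (2 / 3))               # assumed schedule for beta = 0
    agent_estimates = np.empty((m, n_arms))
    for a in range(m):
        samples = true_means + rng.normal(0.0, 0.1, (T0, n_arms))
        agent_estimates[a] = samples.mean(axis=0)

    # Communication round: averaging estimates gives the speedup in m.
    shared_estimates = agent_estimates.mean(axis=0)

    # Exploitation: all agents commit to the oracle's subset.
    chosen = offline_oracle(shared_estimates, k)
    print("chosen subset: ", sorted(chosen.tolist()))
    print("optimal subset:", sorted(np.argsort(true_means)[-k:].tolist()))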
