
Cooperative Online Learning: Keeping your Neighbors Updated

Abstract

We study an asynchronous online learning setting with a network of agents. At each time step, some of the agents are activated, requested to make a prediction, and pay the corresponding loss. The loss function is then revealed to these agents and also to their neighbors in the network. When activations are stochastic, we show that the regret achieved by $N$ agents running the standard Online Mirror Descent is $\mathcal{O}(\sqrt{\alpha T})$, where $T$ is the horizon and $\alpha \le N$ is the independence number of the network. This is in contrast to the regret $\Omega(\sqrt{NT})$ which $N$ agents incur in the same setting when feedback is not shared. We also show a matching lower bound of order $\sqrt{\alpha T}$ that holds for any given network. When the pattern of agent activations is arbitrary, the problem changes significantly: we prove an $\Omega(T)$ lower bound on the regret that holds for any online algorithm oblivious to the feedback source.
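The cooperative protocol described above can be sketched in code. The following is a minimal illustration, not the paper's exact algorithm: $N$ agents on an example graph each run Online Mirror Descent with the entropic regularizer (i.e., exponentiated gradient) over $K$ experts; activated agents predict and pay their loss, and the revealed loss vector is used to update both the active agents and their neighbors. The network, activation probability, and learning rate here are illustrative assumptions.

```python
import numpy as np

# Sketch of the shared-feedback setting (assumed details: path graph,
# Bernoulli activations, entropic OMD over K experts).
rng = np.random.default_rng(0)
N, K, T, eta = 4, 3, 200, 0.1

# Example network: a path 0-1-2-3, given as adjacency lists.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
weights = np.ones((N, K)) / K      # each agent's distribution over experts

total_loss = 0.0
for t in range(T):
    losses = rng.random(K)         # loss vector chosen this round
    # stochastic activations: each agent is activated independently
    active = [a for a in range(N) if rng.random() < 0.5]
    for a in active:
        total_loss += weights[a] @ losses  # (expected) loss paid by agent a
    # feedback is shared: active agents AND their neighbors update
    updaters = set(active)
    for a in active:
        updaters.update(neighbors[a])
    for a in updaters:
        # exponentiated-gradient / OMD step with entropic mirror map
        weights[a] *= np.exp(-eta * losses)
        weights[a] /= weights[a].sum()
```

The point of the sketch is the update rule's scope: without sharing, only `active` agents would update, and their regrets add up to order $\sqrt{NT}$; with neighbor updates, agents in a clique effectively learn as one, which is what drives the dependence on the independence number $\alpha$ rather than $N$.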
