Projection-free Distributed Online Learning with Sublinear Communication Complexity

To deal with complicated constraints via locally light computations in distributed online learning, a recent study presented a projection-free algorithm called distributed online conditional gradient (D-OCG), which achieves an O(T^{3/4}) regret bound for convex losses, where T is the total number of rounds. However, it requires T communication rounds and cannot exploit the strong convexity of losses. In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which attains the same O(T^{3/4}) regret bound with only O(√T) communication rounds for convex losses, and a better regret bound of O(T^{2/3}(log T)^{1/3}) with fewer communication rounds, namely O(T^{1/3}(log T)^{2/3}), for strongly convex losses. The key ideas are to adopt a delayed update mechanism that reduces the communication complexity, and to redefine the surrogate loss function in D-OCG so as to exploit the strong convexity. Furthermore, we provide lower bounds demonstrating that the O(√T) communication rounds required by D-BOCG are optimal (in terms of T) for achieving the O(T^{3/4}) regret with convex losses, and that the O(T^{1/3}(log T)^{2/3}) communication rounds required by D-BOCG are near-optimal (in terms of T), up to polylogarithmic factors, for achieving the O(T^{2/3}(log T)^{1/3}) regret with strongly convex losses. Finally, to handle the more challenging bandit setting, in which only the loss value is available, we incorporate the classical one-point gradient estimator into D-BOCG and obtain similar theoretical guarantees.
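To illustrate what "projection-free" means here, the following is a minimal sketch (not the paper's D-OCG/D-BOCG algorithm) of a conditional gradient (Frank-Wolfe) step: instead of projecting onto the feasible set, each update solves a linear optimization over it. The L1 ball is an illustrative feasible set chosen because its linear optimization has a closed form; the function names are hypothetical.

```python
import numpy as np

def linear_opt_l1_ball(grad, tau=1.0):
    """argmin_{||v||_1 <= tau} <grad, v>: a signed vertex of the L1 ball."""
    i = np.argmax(np.abs(grad))
    v = np.zeros_like(grad)
    v[i] = -tau * np.sign(grad[i])
    return v

def frank_wolfe_step(x, grad, step):
    """Move x toward the linear minimizer; a convex combination of feasible
    points stays feasible, so no projection is ever required."""
    v = linear_opt_l1_ball(grad)
    return (1 - step) * x + step * v
```

Starting from a feasible point (e.g., the origin), every iterate remains a convex combination of L1-ball vertices, which is why such methods are computationally light when projections are expensive.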
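The classical one-point gradient estimator mentioned for the bandit setting can be sketched as follows: given only the loss value f(x + δu) at a randomly perturbed point, the scaled vector (d/δ)·f(x + δu)·u, with u uniform on the unit sphere, is an unbiased estimate of the gradient of a smoothed version of f. This is a generic sketch of the estimator, not the paper's full bandit algorithm.

```python
import numpy as np

def one_point_gradient(f, x, delta, rng):
    """One-point estimate of grad f at x: query f once at x + delta * u,
    where u is a uniformly random direction on the unit sphere."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # normalize to a uniform point on the sphere
    return (d / delta) * f(x + delta * u) * u
```

Averaged over many draws of u, the estimate concentrates around the gradient of the δ-smoothed loss, which is what allows full-information algorithms like D-BOCG to be adapted to bandit feedback.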