
Projection-free Distributed Online Learning with Strongly Convex Losses

Abstract

To efficiently solve distributed online learning problems with complicated constraints, previous studies have proposed several distributed projection-free algorithms. The state-of-the-art one achieves the O(T^{3/4}) regret bound with O(√T) communication complexity. In this paper, we further exploit the strong convexity of loss functions to improve the regret bound and communication complexity. Specifically, we first propose a distributed projection-free algorithm for strongly convex loss functions, which enjoys a better regret bound of O(T^{2/3} log T) with smaller communication complexity of O(T^{1/3}). Furthermore, we demonstrate that the regret of distributed online algorithms with C communication rounds has a lower bound of Ω(T/C), even when the loss functions are strongly convex. This lower bound implies that the O(T^{1/3}) communication complexity of our algorithm is nearly optimal for obtaining the O(T^{2/3} log T) regret bound up to polylogarithmic factors. Finally, we extend our algorithm to the bandit setting and obtain similar theoretical guarantees.
