Sparsity, variance and curvature in multi-armed bandits

Abstract

In (online) learning theory, the concepts of sparsity, variance, and curvature are well understood and are routinely used to obtain refined regret and generalization bounds. In this paper we further our understanding of these concepts in the more challenging limited-feedback scenario. We consider the adversarial multi-armed bandit and linear bandit settings and solve several open problems pertaining to the existence of algorithms with favorable regret bounds under the following assumptions: (i) sparsity of the individual losses, (ii) small variation of the loss sequence, and (iii) curvature of the action set. Specifically, we show that (i) for $s$-sparse losses one can obtain $\tilde{O}(\sqrt{sT})$ regret (solving an open problem of Kwon and Perchet), (ii) for loss sequences with variation bounded by $Q$ one can obtain $\tilde{O}(\sqrt{Q})$ regret (solving an open problem of Kale and Hazan), and (iii) for linear bandit on an $\ell_p^n$ ball one can obtain $\tilde{O}(\sqrt{nT})$ regret for $p \in [1,2]$, while one has $\tilde{\Omega}(n\sqrt{T})$ regret for $p > 2$ (solving an open problem of Bubeck, Cesa-Bianchi, and Kakade). A key new insight used to obtain these results is the use of regularizers satisfying more refined conditions than general self-concordance.
