
Rising Rested Bandits: Lower Bounds and Efficient Algorithms

Main: 22 pages, 19 figures, 1 table; Bibliography: 5 pages; Appendix: 36 pages
Abstract

This paper is in the field of stochastic Multi-Armed Bandits (MABs), i.e., sequential selection techniques able to learn online using only the feedback given by the chosen option (a.k.a. arm). We study a particular case of the rested bandits in which the arms' expected reward is monotonically non-decreasing and concave. We study the inherent sample complexity of the regret minimization problem by deriving suitable regret lower bounds. Then, we design an algorithm for the rested case, R-ed-UCB, providing a regret bound depending on the properties of the instance and, under certain circumstances, of $\widetilde{\mathcal{O}}(T^{2/3})$. We empirically compare our algorithm with state-of-the-art methods for non-stationary MABs over several synthetically generated tasks and an online model selection problem for a real-world dataset.
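As a rough illustration of the setting described in the abstract (not the paper's R-ed-UCB algorithm), the sketch below simulates a rested rising bandit in which each arm's expected reward is a non-decreasing, concave function of the number of times that arm has been pulled, and runs a plain UCB1 baseline on it. The reward curves, noise level, horizon, and all constants are hypothetical choices made only for the example.

```python
import numpy as np

# Illustrative sketch, assuming a rested rising bandit: each arm's mean reward
# grows (concavely) only when that arm is pulled. A standard UCB1 policy is
# used as a baseline; this is NOT the R-ed-UCB estimator from the paper.

rng = np.random.default_rng(0)

K, T = 3, 5000                          # number of arms, horizon (hypothetical)
c = rng.uniform(0.3, 0.9, size=K)       # asymptotic expected reward per arm
rho = rng.uniform(0.01, 0.1, size=K)    # growth rate of each arm's curve

def expected_reward(arm, n_pulls):
    # Non-decreasing and concave in n_pulls: c * (1 - exp(-rho * n))
    return c[arm] * (1.0 - np.exp(-rho[arm] * n_pulls))

pulls = np.zeros(K, dtype=int)
means = np.zeros(K)                     # empirical mean reward per arm
total = 0.0

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                     # pull each arm once to initialize
    else:
        ucb = means + np.sqrt(2.0 * np.log(t) / pulls)
        arm = int(np.argmax(ucb))
    # In the rested setting the mean evolves only with the chosen arm's pulls.
    r = np.clip(expected_reward(arm, pulls[arm]) + rng.normal(0, 0.05), 0, 1)
    pulls[arm] += 1
    means[arm] += (r - means[arm]) / pulls[arm]
    total += r

print("pulls per arm:", pulls, "cumulative reward:", round(total, 1))
```

Note that a stationary baseline like UCB1 anchors its estimates to early, low rewards of rising arms, which is exactly the difficulty the paper's instance-dependent analysis and algorithm are designed to address.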
