We derive an algorithm that achieves the optimal (within constants) pseudo-regret in both adversarial and stochastic multi-armed bandits without prior knowledge of the regime and time horizon. The algorithm is based on online mirror descent with the Tsallis entropy regularizer. We provide a complete characterization of such algorithms and show that Tsallis entropy with power 1/2 achieves the goal. In addition, the proposed algorithm enjoys improved regret guarantees in two intermediate regimes: stochastic bandits with adversarial corruptions introduced by Lykouris et al., and the stochastically constrained adversary studied by Wei and Luo. The algorithm also achieves adversarial and stochastic optimality in the utility-based dueling bandit setting. We provide an empirical evaluation of the algorithm demonstrating that it outperforms UCB1 and EXP3 in stochastic environments. We also provide examples of adversarial environments where UCB1 and Thompson Sampling exhibit almost linear regret, whereas our algorithm suffers only logarithmic regret.
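The abstract describes online mirror descent with the 1/2-Tsallis entropy regularizer. Below is a minimal, hedged sketch of one round of such an update in Python; the environment callback `loss_fn`, the fixed Newton-iteration count, and the `eta = 1/sqrt(t)` learning-rate schedule are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def tsallis_inf(num_arms, num_rounds, loss_fn, rng=None):
    """Sketch of online mirror descent with the 1/2-Tsallis entropy regularizer.

    `loss_fn(t, arm)` -> loss in [0, 1] is a placeholder for the environment,
    which may be stochastic or adversarial.
    """
    rng = np.random.default_rng() if rng is None else rng
    cum_loss_est = np.zeros(num_arms)   # cumulative importance-weighted loss estimates
    total_loss = 0.0

    for t in range(1, num_rounds + 1):
        eta = 1.0 / np.sqrt(t)          # anytime learning rate (illustrative choice)

        # Newton's method for the normalization constant x such that
        # w_i = 4 / (eta * (cum_loss_est[i] - x))**2 sums to one.
        x = cum_loss_est.min() - 2.0 / eta
        for _ in range(50):
            w = 4.0 / (eta * (cum_loss_est - x)) ** 2
            x -= (w.sum() - 1.0) / (eta * np.sum(w ** 1.5))
        w = 4.0 / (eta * (cum_loss_est - x)) ** 2
        w /= w.sum()                    # guard against numerical drift

        arm = int(rng.choice(num_arms, p=w))
        loss = loss_fn(t, arm)
        total_loss += loss

        # Unbiased importance-weighted estimate of the full loss vector.
        cum_loss_est[arm] += loss / w[arm]

    return total_loss
```

For instance, `loss_fn` could draw Bernoulli losses with fixed means to simulate a stochastic bandit, or return losses chosen by an adversary; the same update is used in both cases, which is the point of the regime-agnostic guarantee.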