
LC-Tsallis-INF: Generalized Best-of-Both-Worlds Linear Contextual Bandits

Abstract

This study considers the linear contextual bandit problem with independent and identically distributed (i.i.d.) contexts. For this problem, existing studies have proposed Best-of-Both-Worlds (BoBW) algorithms whose regret satisfies $O(\log^2(T))$ for the number of rounds $T$ in a stochastic regime with a suboptimality gap lower-bounded by a positive constant, while satisfying $O(\sqrt{T})$ in an adversarial regime. However, the dependency on $T$ has room for improvement, and the suboptimality-gap assumption can be relaxed. To address these issues, this study proposes an algorithm whose regret satisfies $O(\log(T))$ in the setting where the suboptimality gap is lower-bounded. Furthermore, we introduce a margin condition, a milder assumption on the suboptimality gap, which characterizes the problem difficulty linked to the suboptimality gap via a parameter $\beta \in (0, \infty]$. We then show that the algorithm's regret satisfies $O\left(\left\{\log(T)\right\}^{\frac{1+\beta}{2+\beta}} T^{\frac{1}{2+\beta}}\right)$. Here, $\beta = \infty$ corresponds to the case considered in existing studies, where the suboptimality gap is lower-bounded, and our regret satisfies $O(\log(T))$ in that case. The proposed algorithm is based on Follow-The-Regularized-Leader (FTRL) with the Tsallis entropy and is referred to as the $\alpha$-Linear-Contextual (LC)-Tsallis-INF.
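To make the FTRL-with-Tsallis-entropy step concrete, below is a minimal sketch of the non-contextual Tsallis-INF probability update over $K$ arms: given cumulative loss estimates $L$, it solves the FTRL optimization with regularizer $\psi(p) = -\sum_i p_i^\alpha / \alpha$, whose solution is $p_i = (\eta(L_i - x))^{-1/(1-\alpha)}$ with the normalizing scalar $x$ found by Newton's method. This is not the paper's full $\alpha$-LC-Tsallis-INF (it ignores the linear contextual structure); the function name, learning rate handling, and solver details are illustrative assumptions.

```python
import numpy as np

def tsallis_inf_probs(L, eta, alpha=0.5, n_iter=50):
    """One FTRL step with Tsallis-entropy regularizer psi(p) = -sum_i p_i^alpha / alpha.

    Returns p with p_i = (eta * (L_i - x))^(-1/(1 - alpha)), where the scalar x
    is chosen by Newton's method so that sum_i p_i = 1. Illustrative sketch only.
    """
    L = np.asarray(L, dtype=float)
    c = 1.0 / (1.0 - alpha)
    # Start just below min(L): there the largest term equals 1, so the residual
    # f(x) >= 0 and Newton's iterates decrease monotonically toward the root
    # (f is convex and increasing on x < min(L)).
    x = L.min() - 1.0 / eta
    for _ in range(n_iter):
        terms = (eta * (L - x)) ** (-c)               # candidate probabilities
        f = terms.sum() - 1.0                         # normalization residual
        fp = c * eta * (eta * (L - x)) ** (-c - 1.0)  # derivative of each term
        x -= f / fp.sum()                             # Newton update on x
    p = (eta * (L - x)) ** (-c)
    return p / p.sum()                                # absorb tiny numerical drift
```

For $\alpha = 1/2$ this recovers the classical Tsallis-INF update of Zimmert and Seldin; in the bandit loop one would sample an arm from the returned distribution and update $L$ with an importance-weighted loss estimate.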
