LC-Tsallis-INF: Generalized Best-of-Both-Worlds Linear Contextual Bandits

This study considers the linear contextual bandit problem with independent and identically distributed (i.i.d.) contexts. For this problem, existing studies have proposed Best-of-Both-Worlds (BoBW) algorithms whose regret satisfies $O(\log^2(T))$ for the number of rounds $T$ in a stochastic regime with a suboptimality gap lower-bounded by a positive constant, while satisfying $O(\sqrt{T})$ in an adversarial regime. However, the dependency on $T$ has room for improvement, and the suboptimality-gap assumption can be relaxed. To address this issue, this study proposes an algorithm whose regret satisfies $O(\log(T))$ in the setting where the suboptimality gap is lower-bounded. Furthermore, we introduce a margin condition, a milder assumption on the suboptimality gap, which characterizes the problem difficulty linked to the suboptimality gap via a parameter $\beta \in (0, \infty]$. We then show that the algorithm's regret satisfies $O\left(\{\log(T)\}^{\frac{1+\beta}{2+\beta}} T^{\frac{1}{2+\beta}}\right)$. Here, $\beta = \infty$ corresponds to the case in the existing studies where the suboptimality gap is lower-bounded by a positive constant, and our regret satisfies $O(\log(T))$ in that case. Our proposed algorithm is based on Follow-The-Regularized-Leader with the Tsallis entropy and is referred to as the $\alpha$-Linear-Contextual (LC)-Tsallis-INF.
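To make the FTRL-with-Tsallis-entropy step concrete, the sketch below computes an arm-sampling distribution by minimizing the estimated cumulative loss plus a negative-Tsallis-entropy regularizer over the probability simplex; the stationarity condition gives $p_i = (\eta(L_i + \lambda))^{-1/(1-\alpha)}$, with the multiplier $\lambda$ found by bisection. This is a minimal illustrative sketch under assumed details, not the paper's algorithm: the function name, the fixed learning rate `eta`, and the bisection solver are illustrative choices, and the actual $\alpha$-LC-Tsallis-INF additionally handles contexts and unbiased loss estimation.

```python
import numpy as np

def tsallis_ftrl_distribution(cum_loss_est, eta, alpha=0.5, tol=1e-10):
    """FTRL step with (negative) Tsallis-entropy regularization (illustrative):

        p = argmin_{p in simplex} <L, p> - (1/(eta * alpha)) * sum_i p_i^alpha,

    whose stationarity condition yields p_i = (eta * (L_i + lam))^{-1/(1-alpha)}
    for a Lagrange multiplier lam chosen so that sum(p) = 1.
    """
    L = np.asarray(cum_loss_est, dtype=float)
    exponent = -1.0 / (1.0 - alpha)

    def mass(lam):
        return np.sum((eta * (L + lam)) ** exponent)

    lo = -L.min() + 1e-12          # lam must keep every base eta*(L_i + lam) positive
    hi = lo + 1.0
    while mass(hi) > 1.0:          # expand the bracket until total mass drops below 1
        hi = lo + 2.0 * (hi - lo)
    for _ in range(200):           # bisection: mass(lam) is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if mass(mid) > 1.0:
            lo = mid
        else:
            hi = mid
        if hi - lo <= tol:
            break
    p = (eta * (L + 0.5 * (lo + hi))) ** exponent
    return p / p.sum()             # renormalize away residual bisection error

# Example: sampling distribution over 4 arms given estimated cumulative losses.
p = tsallis_ftrl_distribution(np.array([1.0, 2.0, 3.0, 4.0]), eta=0.5)
print(np.round(p, 4), p.sum())
```

For $\alpha = 1/2$ this update takes the familiar Tsallis-INF shape $p_i \propto (\eta(L_i + \lambda))^{-2}$; the bisection works because the total mass is strictly decreasing in $\lambda$, so a single bracketed search suffices.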