
How Does Variance Shape the Regret in Contextual Bandits?

Abstract

We consider realizable contextual bandits with general function approximation, investigating how small reward variance can lead to better-than-minimax regret bounds. Unlike in minimax bounds, we show that the eluder dimension $d_\text{elu}$, a complexity measure of the function class, plays a crucial role in variance-dependent bounds. We consider two types of adversary: (1) Weak adversary: The adversary sets the reward variance before observing the learner's action. In this setting, we prove that a regret of $\Omega(\sqrt{\min\{A,d_\text{elu}\}\Lambda}+d_\text{elu})$ is unavoidable when $d_\text{elu}\leq\sqrt{AT}$, where $A$ is the number of actions, $T$ is the total number of rounds, and $\Lambda$ is the total variance over $T$ rounds. For the $A\leq d_\text{elu}$ regime, we derive a nearly matching upper bound $\tilde{O}(\sqrt{A\Lambda}+d_\text{elu})$ for the special case where the variance is revealed at the beginning of each round. (2) Strong adversary: The adversary sets the reward variance after observing the learner's action. We show that a regret of $\Omega(\sqrt{d_\text{elu}\Lambda}+d_\text{elu})$ is unavoidable when $\sqrt{d_\text{elu}\Lambda}+d_\text{elu}\leq\sqrt{AT}$. In this setting, we provide an upper bound of order $\tilde{O}(d_\text{elu}\sqrt{\Lambda}+d_\text{elu})$. Furthermore, we examine the setting where the function class additionally provides distributional information of the reward, as studied by Wang et al. (2024). We demonstrate that the regret bound $\tilde{O}(\sqrt{d_\text{elu}\Lambda}+d_\text{elu})$ established in their work is unimprovable when $\sqrt{d_\text{elu}\Lambda}+d_\text{elu}\leq\sqrt{AT}$. However, with a slightly different definition of the total variance and with the assumption that the reward follows a Gaussian distribution, one can achieve a regret of $\tilde{O}(\sqrt{A\Lambda}+d_\text{elu})$.
