
Variance-Aware Sparse Linear Bandits

International Conference on Learning Representations (ICLR), 2022
Abstract

It is well known that for sparse linear bandits, when ignoring the dependency on the sparsity (which is much smaller than the ambient dimension), the worst-case minimax regret is $\widetilde{\Theta}\left(\sqrt{dT}\right)$, where $d$ is the ambient dimension and $T$ is the number of rounds. On the other hand, in the benign setting where there is no noise and the action set is the unit sphere, one can use divide-and-conquer to achieve $\widetilde{\mathcal O}(1)$ regret, which is (nearly) independent of $d$ and $T$. In this paper, we present the first variance-aware regret guarantee for sparse linear bandits: $\widetilde{\mathcal O}\left(\sqrt{d\sum_{t=1}^T \sigma_t^2} + 1\right)$, where $\sigma_t^2$ is the variance of the noise at the $t$-th round. This bound naturally interpolates between the regret bounds for the worst-case constant-variance regime (i.e., $\sigma_t \equiv \Omega(1)$) and the benign deterministic regime (i.e., $\sigma_t \equiv 0$). To achieve this variance-aware regret guarantee, we develop a general framework that converts any variance-aware linear bandit algorithm into a variance-aware algorithm for sparse linear bandits in a "black-box" manner. To illustrate that the claimed bounds indeed hold, we instantiate the framework with two recent algorithms as black boxes: the first can handle the unknown-variance case, and the second is more efficient.
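To see why the bound interpolates between the two regimes, one can substitute each noise level into the stated guarantee (a simple sanity check, not taken from the paper itself):

```latex
% Constant-variance regime: \sigma_t \equiv \sigma = \Omega(1) for all t, so
%   \sum_{t=1}^T \sigma_t^2 = \sigma^2 T, and the bound becomes
\widetilde{\mathcal O}\!\left(\sqrt{d \sigma^2 T} + 1\right)
  = \widetilde{\mathcal O}\!\left(\sigma\sqrt{dT}\right),
% recovering the worst-case \widetilde{\Theta}(\sqrt{dT}) minimax rate.

% Deterministic regime: \sigma_t \equiv 0 for all t, so the sum vanishes and
\widetilde{\mathcal O}\!\left(\sqrt{d \cdot 0} + 1\right)
  = \widetilde{\mathcal O}(1),
% matching the (nearly) dimension- and horizon-independent benign rate.
```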
