
Noise-Adaptive Confidence Sets for Linear Bandits and Application to Bayesian Optimization

Abstract

Adapting to an a priori unknown noise level is an important but challenging problem in sequential decision-making, as efficient exploration typically requires knowledge of the noise level, which is often only loosely specified. We report significant progress in addressing this issue for linear bandits in two respects. First, we propose a novel confidence set that is `semi-adaptive' to the unknown sub-Gaussian parameter $\sigma_*^2$ in the sense that the (normalized) confidence width scales with $\sqrt{d\sigma_*^2 + \sigma_0^2}$, where $d$ is the dimension and $\sigma_0^2$ is the specified (known) sub-Gaussian parameter, which can be much larger than $\sigma_*^2$. This is a significant improvement over the $\sqrt{d\sigma_0^2}$ width of the standard confidence set of Abbasi-Yadkori et al. (2011), especially when $d$ is large or $\sigma_*^2 = 0$. We show that this leads to an improved regret bound in linear bandits. Second, for bounded rewards, we propose a novel variance-adaptive confidence set with much improved numerical performance over prior art. We then apply this confidence set to develop what we claim to be the first practical variance-adaptive linear bandit algorithm via an optimistic approach, which is enabled by our novel regret analysis technique. Both of our confidence sets rely critically on `regret equality' from online learning. Our empirical evaluation on diverse Bayesian optimization tasks shows that our proposed algorithms perform better than or comparably to existing methods.
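For context, the standard confidence set referenced above (Abbasi-Yadkori et al., 2011) is often quoted in the following simplified form; the notation below is illustrative and not defined in the abstract ($\bar V_t = \lambda I + \sum_{s=1}^{t} x_s x_s^\top$ the regularized design matrix, $\hat\theta_t$ the ridge estimate, $S \ge \|\theta_*\|_2$, and $L \ge \|x_t\|_2$). With probability at least $1-\delta$, simultaneously for all $t \ge 1$,

\[
  \big\| \hat\theta_t - \theta_* \big\|_{\bar V_t}
  \;\le\;
  \sigma_0 \sqrt{\,d \log\!\Big(\frac{1 + t L^2/\lambda}{\delta}\Big)} \;+\; \sqrt{\lambda}\, S ,
\]

so its (normalized) width scales as $\sqrt{d\sigma_0^2}$ up to logarithmic factors. The semi-adaptive confidence set proposed in the paper instead has width scaling as $\sqrt{d\sigma_*^2 + \sigma_0^2}$, which is smaller whenever $\sigma_*^2 \ll \sigma_0^2$.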
