Parameter and Feature Selection in Stochastic Linear Bandits

International Conference on Machine Learning (ICML), 2021
Abstract

We study two model selection settings in stochastic linear bandits (LB). In the first setting, which we refer to as feature selection, the expected reward of the LB problem is in the linear span of at least one of $M$ feature maps (models). In the second setting, the reward parameter of the LB problem is arbitrarily selected from $M$ models represented as (possibly) overlapping balls in $\mathbb{R}^d$. However, the agent only has access to misspecified models, i.e., estimates of the centers and radii of the balls. We refer to this setting as parameter selection. For each setting, we develop and analyze an algorithm that is based on a reduction from bandits to full-information problems. This allows us to obtain regret bounds that are no worse (up to a $\sqrt{\log M}$ factor) than the case where the true model is known. The regret of our parameter selection algorithm also scales logarithmically with model uncertainty. Finally, we empirically show the effectiveness of our algorithms using synthetic and real-world experiments.
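The feature-selection setting described above can be illustrated with a minimal simulation. This is a hedged sketch of the problem setup only, not the paper's algorithm: the dimension `d`, the number of models `M`, the true model index `m_star`, and the noise level `sigma` are all hypothetical choices made for illustration.

```python
import numpy as np

# Sketch of the feature-selection setting: there are M candidate feature
# maps, and the expected reward is linear in exactly one (unknown) map.
rng = np.random.default_rng(0)

d = 3       # feature dimension (hypothetical)
M = 4       # number of candidate feature maps (hypothetical)
m_star = 2  # index of the true model, unknown to the agent

# Each candidate feature map sends a raw action x in R^2 to a
# d-dimensional feature vector (here, a random linear map for simplicity).
feature_maps = [
    (lambda x, A=rng.standard_normal((d, 2)): A @ x) for _ in range(M)
]
theta_star = rng.standard_normal(d)  # unknown reward parameter

def expected_reward(x):
    # The expected reward lies in the linear span of the true feature map.
    return feature_maps[m_star](x) @ theta_star

def noisy_reward(x, sigma=0.1):
    # Stochastic bandit feedback: expected reward plus Gaussian noise.
    return expected_reward(x) + sigma * rng.standard_normal()

x = rng.standard_normal(2)  # an example action
print(expected_reward(x), noisy_reward(x))
```

The agent observes only `noisy_reward(x)` for its chosen actions and must compete with the regret it would incur if `m_star` were known in advance.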
