Smoothness-Adaptive Contextual Bandits

We study a non-parametric multi-armed bandit problem with stochastic covariates, where a key driver of complexity is the smoothness with which the payoff functions vary with the covariates. Previous studies have derived minimax-optimal algorithms for the case where the smoothness of the payoff functions is known a priori. In practice, however, such advance information is typically unavailable, and misspecifying the smoothness can severely degrade the performance of existing methods. In this work, we consider a framework in which the smoothness is not known a priori, and study when and how algorithms may adapt to unknown smoothness. First, we establish that, in general, designing bandit algorithms that adapt to the unknown smoothness of the payoff functions is impossible. We overcome this impossibility result by leveraging the notion of self-similarity, a concept from the statistics literature that is traditionally invoked to enable adaptive confidence intervals. Under a self-similarity assumption, we develop a policy that infers the smoothness of the payoff functions from observations collected throughout the decision-making process, and we establish that this policy matches, up to logarithmic factors, the regret rate achievable when the smoothness is known a priori. Finally, we extend our method to account for local notions of smoothness and show that, under reasonable assumptions, it achieves performance characterized by the local, rather than global, complexity of the problem.
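For readers unfamiliar with the setting, the following LaTeX sketch records the standard benchmark the abstract alludes to. The notation (Hölder exponent beta, Lipschitz constant L, margin parameter alpha, covariate dimension d) is assumed here from the prior literature on bandits with covariates, not taken from this paper, and the self-similarity condition shown is one common formalization from the adaptive-confidence-interval literature; treat all of it as an illustrative assumption rather than the paper's exact definitions.

% Illustrative only: notation assumed from the prior literature, not from this paper.
% Payoff functions are typically assumed (\beta, L)-Hoelder smooth in the covariate
% x \in [0,1]^d:
\[
  f_k \in \Sigma(\beta, L):\qquad
  \lvert f_k(x) - f_k(x') \rvert \;\le\; L\,\lVert x - x' \rVert^{\beta},
  \qquad 0 < \beta \le 1 .
\]
% With a margin parameter \alpha, the minimax regret over T rounds when \beta is
% known in advance scales, up to logarithmic factors, as
\[
  R_T \;\asymp\; T^{\,1 - \frac{\beta(1+\alpha)}{2\beta + d}} ,
\]
% which is the benchmark an adaptive policy must match without knowing \beta.
% A self-similarity condition additionally lower-bounds the approximation bias at
% each resolution level j (K_j denoting projection onto an approximation space):
\[
  c_1\, 2^{-j\beta} \;\le\; \lVert K_j(f_k) - f_k \rVert_\infty \;\le\; c_2\, 2^{-j\beta}
  \qquad \text{for all } j \ge j_0 ,
\]
% The upper bound follows from Hoelder smoothness alone; the added lower bound is
% what makes \beta identifiable from observations collected online.

Intuitively, without the lower bound a very smooth function can masquerade as a rough one at every finite sample size, which is the source of the impossibility result; self-similarity rules this out and lets the policy estimate beta on the fly.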