Approximation Methods for Kernelized Bandits

International Conference on Machine Learning (ICML), 2020
Abstract

The RKHS bandit problem (also called the kernelized multi-armed bandit problem) is an online optimization problem over non-linear functions with noisy feedback. Although the problem has been extensively studied, several results remain unsatisfactory compared to the well-studied linear bandit case. Specifically, there is no general algorithm for the adversarial RKHS bandit problem. In addition, the high computational complexity of existing algorithms hinders practical application. We address these issues through a novel combination of approximation theory and the misspecified linear bandit problem. Using an approximation method, we propose efficient algorithms for the stochastic RKHS bandit problem and the first general algorithm for the adversarial RKHS bandit problem. Furthermore, we empirically confirm one of our theoretical results: our proposed method achieves cumulative regret comparable to that of IGP-UCB with a much shorter running time.
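The abstract only sketches the reduction, so the following is a minimal illustration of the general idea, not the paper's algorithm: approximate the kernel with finite-dimensional features (here random Fourier features for an RBF kernel, one common choice) and run a LinUCB-style linear bandit on those features; the approximation error is what makes the resulting linear bandit misspecified. The feature dimension `D`, bandwidth `gamma`, exploration weight `beta`, noise level, and toy reward function below are all illustrative choices.

```python
# Minimal sketch (assumptions: RFF approximation, LinUCB-style rule;
# not the paper's exact feature construction or algorithm).
import numpy as np

rng = np.random.default_rng(0)

def rff_map(d, D=100, gamma=1.0):
    """Random Fourier features approximating k(x, y) = exp(-gamma * ||x - y||^2)."""
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return lambda Z: np.sqrt(2.0 / D) * np.cos(Z @ W + b)

def linucb(arms, reward_fn, T=500, lam=1.0, beta=2.0):
    """LinUCB on approximate features: the RKHS bandit is treated as a
    (mis)specified linear bandit in the finite-dimensional feature space."""
    Phi = rff_map(arms.shape[1])(arms)   # (n_arms, D) feature matrix
    D = Phi.shape[1]
    A = lam * np.eye(D)                  # regularized design matrix
    b = np.zeros(D)
    best = max(reward_fn(a) for a in arms)
    regret = []
    for _ in range(T):
        theta = np.linalg.solve(A, b)    # ridge estimate of the reward vector
        A_inv = np.linalg.inv(A)
        # Upper confidence bound: mean estimate + exploration bonus.
        ucb = Phi @ theta + beta * np.sqrt(np.einsum("ij,jk,ik->i", Phi, A_inv, Phi))
        i = int(np.argmax(ucb))
        r = reward_fn(arms[i]) + 0.1 * rng.normal()  # noisy feedback
        A += np.outer(Phi[i], Phi[i])
        b += r * Phi[i]
        regret.append(best - reward_fn(arms[i]))
    return np.cumsum(regret)

# Usage: noisy maximization of a smooth function on a 1-D grid of arms.
arms = np.linspace(-2, 2, 50).reshape(-1, 1)
cum_regret = linucb(arms, reward_fn=lambda x: np.sin(3 * x[0]) * np.exp(-x[0] ** 2))
print("final cumulative regret:", cum_regret[-1])
```

The efficiency gain in this reduction comes from replacing per-round kernel-matrix operations with updates of a fixed `D x D` design matrix, at the cost of a bounded approximation (misspecification) error.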
