Impact of Representation Learning in Linear Bandits

We study how representation learning can improve the efficiency of bandit problems. We consider the setting where we play $T$ linear bandits with dimension $d$ concurrently, and these $T$ bandit tasks share a common $k$ ($\ll d$) dimensional linear representation. For the finite-action setting, we present a new algorithm which achieves $\widetilde{O}(T\sqrt{kN} + \sqrt{dkNT})$ regret, where $N$ is the number of rounds we play for each bandit. When $T$ is sufficiently large, our algorithm significantly outperforms the naive algorithm (playing $T$ bandits independently), which achieves $\widetilde{O}(T\sqrt{dN})$ regret. We also provide an $\Omega(T\sqrt{kN} + \sqrt{dkNT})$ regret lower bound, showing that our algorithm is minimax-optimal up to poly-logarithmic factors. Furthermore, we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound that demonstrates the benefit of representation learning in certain regimes. Finally, we present experiments on synthetic and real-world data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms.
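To make the shared-representation setting concrete, the following is a minimal sketch in Python of the problem model described above, not of the paper's algorithm. The constants (`d`, `k`, `T`, `N`) and the names `B`, `W`, and `reward` are hypothetical choices for this illustration; the final comparison evaluates only the leading terms of the two regret rates, ignoring poly-logarithmic factors.

```python
import numpy as np

# Hypothetical synthetic instance of the shared-representation model:
# each task's parameter theta_t = B @ w_t, with a common B in R^{d x k}.
rng = np.random.default_rng(0)
d, k, T, N = 100, 5, 50, 10_000  # illustrative sizes, not from the paper

B, _ = np.linalg.qr(rng.standard_normal((d, k)))  # shared d x k representation
W = rng.standard_normal((k, T))                   # task-specific k-dim coefficients
Theta = B @ W                      # all T task parameters lie in a k-dim subspace

def reward(t: int, x: np.ndarray) -> float:
    """Noisy linear reward for playing action (feature vector) x in task t."""
    return float(x @ Theta[:, t] + rng.normal(scale=0.1))

# Leading terms of the two regret bounds (log factors dropped):
joint = T * np.sqrt(k * N) + np.sqrt(d * k * N * T)  # representation learning
naive = T * np.sqrt(d * N)                           # T independent bandits
print(f"joint ~ {joint:,.0f}  vs  naive ~ {naive:,.0f}")
```

With these illustrative sizes, the joint bound's leading terms are several times smaller than the naive one, reflecting the regime where $T$ is large and $k \ll d$.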