Impact of Representation Learning in Linear Bandits

Abstract

We study how representation learning can improve the efficiency of bandit problems. We consider the setting where we play $T$ linear bandits with dimension $d$ concurrently, and these $T$ bandit tasks share a common $k$ ($\ll d$) dimensional linear representation. For the finite-action setting, we present a new algorithm which achieves $\widetilde{O}(T\sqrt{kN} + \sqrt{dkNT})$ regret, where $N$ is the number of rounds we play for each bandit. When $T$ is sufficiently large, our algorithm significantly outperforms the naive algorithm (playing $T$ bandits independently) that achieves $\widetilde{O}(T\sqrt{dN})$ regret. We also provide an $\Omega(T\sqrt{kN} + \sqrt{dkNT})$ regret lower bound, showing that our algorithm is minimax-optimal up to poly-logarithmic factors. Furthermore, we extend our algorithm to the infinite-action setting and obtain a corresponding regret bound which demonstrates the benefit of representation learning in certain regimes. We also present experiments on synthetic and real-world data to illustrate our theoretical findings and demonstrate the effectiveness of our proposed algorithms.
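
To make the comparison between the two regret bounds concrete, here is a minimal sketch (not from the paper) that evaluates both expressions for illustrative parameter values. It drops constants and poly-logarithmic factors, and the helper names and the chosen values of $T$, $d$, $k$, $N$ are assumptions for illustration only.

```python
import math

def multitask_bound(T, d, k, N):
    # T*sqrt(kN) + sqrt(dkNT): the bound achieved by the
    # representation-learning algorithm (constants/log factors dropped).
    return T * math.sqrt(k * N) + math.sqrt(d * k * N * T)

def naive_bound(T, d, N):
    # T*sqrt(dN): playing the T bandits independently.
    return T * math.sqrt(d * N)

# Example regime: many tasks, high ambient dimension d,
# low-dimensional shared representation k << d.
T, d, k, N = 1000, 100, 5, 10_000
print(multitask_bound(T, d, k, N))  # ~2.9e5
print(naive_bound(T, d, N))         # ~1.0e6
```

For large $T$ the first term dominates, so the improvement over the naive bound approaches a factor of $\sqrt{d/k}$, matching the abstract's claim that the gain appears when $T$ is sufficiently large.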
