
Near-optimal Representation Learning for Linear Bandits and Linear RL

Abstract

This paper studies representation learning for multi-task linear bandits and multi-task episodic RL with linear value function approximation. We first consider the setting where we play $M$ linear bandits with dimension $d$ concurrently, and these bandits share a common $k$-dimensional linear representation so that $k \ll d$ and $k \ll M$. We propose a sample-efficient algorithm, MTLR-OFUL, which leverages the shared representation to achieve $\tilde{O}(M\sqrt{dkT} + d\sqrt{kMT})$ regret, with $T$ being the total number of steps. Our regret significantly improves upon the baseline $\tilde{O}(Md\sqrt{T})$ achieved by solving each task independently. We further develop a lower bound that shows our regret is near-optimal when $d > M$. Furthermore, we extend the algorithm and analysis to multi-task episodic RL with linear value function approximation under low inherent Bellman error \citep{zanette2020learning}. To the best of our knowledge, this is the first theoretical result that characterizes the benefits of multi-task representation learning for exploration in RL with function approximation.
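
To make the stated improvement concrete, one can compare each term of the multi-task bound against the independent-task baseline; in the assumed regime $k \ll d$ and $k \ll M$, both ratios vanish:

$$\frac{M\sqrt{dkT}}{Md\sqrt{T}} = \sqrt{\frac{k}{d}} \ll 1, \qquad \frac{d\sqrt{kMT}}{Md\sqrt{T}} = \sqrt{\frac{k}{M}} \ll 1.$$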
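
As a minimal, hypothetical illustration of the shared-representation structure (this is not the paper's MTLR-OFUL algorithm), the sketch below simulates $M$ task parameters of the form $\theta_m = B w_m$ with a common $B \in \mathbb{R}^{d \times k}$, and recovers the shared column span of $B$ from per-task least-squares estimates via a truncated SVD; all names and constants are illustrative assumptions.

# Illustration of the shared low-rank structure in the multi-task
# linear setting: each task m has parameter theta_m = B @ w_m with a
# common B in R^{d x k}. Not the paper's MTLR-OFUL algorithm.
import numpy as np

rng = np.random.default_rng(0)
d, k, M, n = 50, 3, 20, 200  # ambient dim, shared dim, tasks, samples per task

B = np.linalg.qr(rng.standard_normal((d, k)))[0]  # shared representation (orthonormal)
W = rng.standard_normal((k, M))                   # per-task weights
Theta = B @ W                                     # d x M matrix of task parameters

# Per-task least squares on noisy linear responses.
Theta_hat = np.zeros((d, M))
for m in range(M):
    X = rng.standard_normal((n, d))
    y = X @ Theta[:, m] + 0.1 * rng.standard_normal(n)
    Theta_hat[:, m] = np.linalg.lstsq(X, y, rcond=None)[0]

# The top-k left singular vectors of the stacked per-task estimates
# approximately recover the shared column span of B.
U, _, _ = np.linalg.svd(Theta_hat, full_matrices=False)
B_hat = U[:, :k]

# Projection residual ||B - B_hat B_hat^T B||_2 is small when recovery succeeds.
err = np.linalg.norm(B - B_hat @ (B_hat.T @ B), ord=2)
print(f"subspace recovery error: {err:.3f}")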
