
Reinforcement Learning in Feature Space: Matrix Bandit, Kernels, and Regret Bound

Abstract

Exploration in reinforcement learning (RL) suffers from the curse of dimensionality when the state-action space is large. A common practice is to parameterize the high-dimensional value and policy functions using given features. However, existing methods either have no theoretical guarantee or suffer a regret that is exponential in the planning horizon $H$. In this paper, we propose an online RL algorithm, namely MatrixRL, that leverages ideas from linear bandit to learn a low-dimensional representation of the probability transition model while carefully balancing the exploitation-exploration tradeoff. We show that MatrixRL achieves a regret bound $O\big(H^2 d \log T \sqrt{T}\big)$, where $d$ is the number of features. MatrixRL has an equivalent kernelized version, which is able to work with an arbitrary kernel Hilbert space without using explicit features. In this case, kernelized MatrixRL satisfies a regret bound $O\big(H^2 \widetilde{d} \log T \sqrt{T}\big)$, where $\widetilde{d}$ is the effective dimension of the kernel space. To the best of our knowledge, for RL using features or kernels, our results are the first regret bounds that are near-optimal in time $T$ and dimension $d$ (or $\widetilde{d}$) and polynomial in the planning horizon $H$.
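
The abstract describes learning a low-dimensional transition representation with linear-bandit-style exploration. Below is a minimal, hypothetical sketch of that idea, assuming the transition model factorizes as $P(s'\mid s,a) \approx \phi(s,a)^\top M\, \psi(s')$, that the core matrix $M$ is estimated by matrix ridge regression, and that optimism is injected through an additive confidence bonus; the class, parameter names, and bonus form are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

class MatrixRLSketch:
    """Hypothetical sketch of a MatrixRL-style model estimate with a bandit-style bonus."""

    def __init__(self, d, d_prime, reg=1.0, beta=1.0):
        self.A = reg * np.eye(d)          # regularized Gram matrix of features phi(s, a)
        self.B = np.zeros((d, d_prime))   # accumulated outer products phi(s, a) psi(s')^T
        self.beta = beta                  # width multiplier for the exploration bonus

    def update(self, phi_sa, psi_next):
        """Record one observed transition (s, a) -> s'."""
        self.A += np.outer(phi_sa, phi_sa)
        self.B += np.outer(phi_sa, psi_next)

    def core_matrix(self):
        """Ridge-regression estimate of the core transition matrix M."""
        return np.linalg.solve(self.A, self.B)

    def bonus(self, phi_sa):
        """Optimistic bonus: larger in directions of feature space that are poorly explored."""
        return self.beta * np.sqrt(phi_sa @ np.linalg.solve(self.A, phi_sa))

# Usage: fit on synthetic transitions, then query an optimistic next-state feature estimate.
model = MatrixRLSketch(d=4, d_prime=4)
rng = np.random.default_rng(0)
for _ in range(100):
    model.update(rng.normal(size=4), rng.normal(size=4))
phi_query = rng.normal(size=4)
expected_psi = model.core_matrix().T @ phi_query   # estimated E[psi(s') | s, a]
optimistic_width = model.bonus(phi_query)          # uncertainty used for exploration
```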
