
The Effect of Q-function Reuse on the Total Regret of Tabular, Model-Free, Reinforcement Learning

Sriram Ganapathi Subramanian
Abstract

Some reinforcement learning methods suffer from high sample complexity, making them impractical in real-world settings. Q-function reuse, a transfer learning method, is one way to reduce the sample complexity of learning, potentially improving the usefulness of existing algorithms. Prior work has shown the empirical effectiveness of Q-function reuse for various environments when applied to model-free algorithms. To the best of our knowledge, there has been no theoretical work showing the regret of Q-function reuse when applied to the tabular, model-free setting. We aim to bridge the gap between theoretical and empirical work on Q-function reuse by providing theoretical insights into its effectiveness when applied to the Q-learning with UCB-Hoeffding algorithm. Our main contribution is showing that, in a specific case, applying Q-function reuse to the Q-learning with UCB-Hoeffding algorithm yields a regret that is independent of the state and action space sizes. We also provide empirical results supporting our theoretical findings.
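To make the setting concrete, here is a minimal sketch of episodic tabular Q-learning with UCB-Hoeffding exploration bonuses (in the style of Jin et al., 2018), where the Q-table can optionally be seeded from a reused source-task Q-function instead of the standard optimistic initialization. The function names, constants, and toy environment below are illustrative assumptions, not the paper's implementation.

```python
import math

def q_learning_ucb_hoeffding(env_reset, env_step, S, A, H, K,
                             q_init=None, c=1.0, p=0.05):
    """Episodic tabular Q-learning with UCB-Hoeffding bonuses.

    If q_init is given, the Q-table is initialized from a reused
    (source-task) Q-function instead of the optimistic default H --
    this initialization is the core idea behind Q-function reuse.
    """
    iota = math.log(S * A * H * K / p)  # log term inside the bonus
    Q = [[[float(H) if q_init is None else float(q_init[h][s][a])
           for a in range(A)] for s in range(S)] for h in range(H)]
    N = [[[0] * A for _ in range(S)] for _ in range(H)]  # visit counts

    for _ in range(K):                      # K episodes
        s = env_reset()
        for h in range(H):                  # H steps per episode
            # Act greedily with respect to the optimistic Q-estimates.
            a = max(range(A), key=lambda x: Q[h][s][x])
            s_next, r = env_step(s, a, h)
            N[h][s][a] += 1
            t = N[h][s][a]
            alpha = (H + 1) / (H + t)                 # learning-rate schedule
            bonus = c * math.sqrt(H ** 3 * iota / t)  # Hoeffding-style bonus
            v_next = 0.0 if h == H - 1 else min(H, max(Q[h + 1][s_next]))
            Q[h][s][a] = (1 - alpha) * Q[h][s][a] + alpha * (r + v_next + bonus)
            s = s_next
    return Q, N
```

On a toy deterministic two-armed problem (S=1, H=1, where action 1 pays reward 1 and action 0 pays nothing), the bonuses drive a brief exploration phase after which the agent concentrates its pulls on the rewarding action; passing a well-chosen `q_init` would shorten or skip that phase, which is the intuition behind the regret reduction the abstract describes.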
