Driving Reinforcement Learning with Models

Abstract
In this paper we propose a new approach that complements reinforcement learning (RL) with model-based control, in particular Model Predictive Control (MPC). We introduce MPC-augmented RL (MPRL), an algorithm that combines RL and MPC in a novel way so that each can leverage the other's strengths. We demonstrate the effectiveness of MPRL by having it play the Atari game Pong. On this task, the results show that MPRL outperforms both RL and MPC when either is used on its own.
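As a rough illustration of the kind of hybrid scheme the abstract describes, the sketch below shows a generic agent that queries an MPC-style random-shooting planner when a dynamics model is trusted and otherwise falls back to an RL policy. The switching rule, the dummy dynamics model, the placeholder policy, and all class and parameter names are assumptions for illustration only; they are not the MPRL algorithm, whose details are given in the paper body.

```python
import numpy as np

# Illustrative sketch only: a generic RL/MPC hybrid controller.
# The switching rule, model, and policy below are placeholders,
# NOT the MPRL algorithm from the paper.

class HybridAgent:
    def __init__(self, n_actions, horizon=5, n_candidates=32, seed=0):
        self.n_actions = n_actions
        self.horizon = horizon
        self.n_candidates = n_candidates
        self.rng = np.random.default_rng(seed)
        # Placeholder RL policy: a linear score over a 4-dimensional state.
        self.policy_weights = self.rng.normal(size=(4, n_actions)) * 0.01

    def model_is_reliable(self, state):
        # Hypothetical check: use MPC only where the model is trusted.
        return np.linalg.norm(state) < 1.0

    def simulate(self, state, action):
        # Dummy dynamics model used by the planner (assumption, not from the paper).
        next_state = state + 0.1 * (action - self.n_actions / 2) * np.ones_like(state)
        reward = -np.linalg.norm(next_state)
        return next_state, reward

    def mpc_action(self, state):
        # Random-shooting MPC: sample action sequences, roll out the model,
        # and return the first action of the best-scoring sequence.
        best_return, best_first_action = -np.inf, 0
        for _ in range(self.n_candidates):
            seq = self.rng.integers(self.n_actions, size=self.horizon)
            s, total = state.copy(), 0.0
            for a in seq:
                s, r = self.simulate(s, a)
                total += r
            if total > best_return:
                best_return, best_first_action = total, int(seq[0])
        return best_first_action

    def rl_action(self, state):
        # Placeholder RL policy: greedy with respect to the linear score.
        return int(np.argmax(state @ self.policy_weights))

    def act(self, state):
        if self.model_is_reliable(state):
            return self.mpc_action(state)
        return self.rl_action(state)

agent = HybridAgent(n_actions=3)
print(agent.act(np.zeros(4)))      # model trusted -> MPC action
print(agent.act(np.full(4, 2.0)))  # model not trusted -> RL policy action
```

The point of the sketch is only the overall structure: a planner and a learned policy sit behind a single `act` interface, with some criterion deciding which one drives the action at each step.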