ISL: Optimal Policy Learning With Optimal Exploration-Exploitation
Trade-Off
Maximum entropy reinforcement learning (RL) has received significant attention recently, and some algorithms within this framework achieve state-of-the-art performance on many challenging tasks. One of the main advantages of these algorithms is improved exploration; however, they remain inefficient at performing deep exploration. The main reason for this deficiency is that, traditionally in RL, the learning rules and deep-exploration schemes have been derived separately, with the exploration-exploitation dilemma often addressed through heuristics. In this article we show that both the learning equations and the exploration-exploitation strategy can be derived in tandem as the solution to a well-posed optimization problem whose minimization leads to the optimal value function. As in maximum entropy RL, we do so by augmenting the traditional RL objective with a regularization term (albeit not the entropy of the policy). The contribution of this paper is a new off-policy algorithm (referred to as the ISL strategy) whose derivation follows this idea. The algorithm is similar to recent work in maximum entropy RL but is much more effective at performing deep exploration. We demonstrate the effectiveness of our method on a range of challenging deep-exploration benchmarks.
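For context, the family of objectives the abstract alludes to can be sketched generically. The specific regularizer used by ISL is not given here, so the term $\Omega$ below is only a placeholder for whichever regularizer is chosen, not the paper's actual choice:

\[
J(\pi) \;=\; \mathbb{E}_{\pi}\!\left[\,\sum_{t=0}^{\infty} \gamma^{t}\Bigl(r(s_t, a_t) \;+\; \alpha\,\Omega\bigl(\pi(\cdot \mid s_t)\bigr)\Bigr)\right]
\]

Setting $\Omega(\pi(\cdot \mid s)) = \mathcal{H}(\pi(\cdot \mid s))$, the entropy of the policy, recovers the standard maximum entropy RL objective (used, e.g., by Soft Actor-Critic); the abstract states that ISL replaces the entropy with a different regularization term.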