Elastic Step DQN: A novel multi-step algorithm to alleviate overestimation in Deep Q-Networks

Abstract

Deep Q-Networks (DQN) was the first reinforcement learning algorithm to use a deep neural network to surpass human-level performance in a number of Atari learning environments. However, divergent and unstable behaviour has been a long-standing issue in DQNs. The unstable behaviour is often characterised by overestimation of the Q-values, commonly referred to as the overestimation bias. To address the overestimation bias and the divergent behaviour, a number of heuristic extensions have been proposed. Notably, multi-step updates have been shown to drastically reduce unstable behaviour while improving the agent's training performance. However, agents are often highly sensitive to the selection of the multi-step update horizon (n), and our empirical experiments show that a poorly chosen static value for n can in many cases lead to worse performance than single-step DQN. Inspired by the success of n-step DQN and the effects that multi-step updates have on the overestimation bias, this paper proposes a new algorithm that we call `Elastic Step DQN' (ES-DQN). It dynamically varies the step-size horizon in multi-step updates based on the similarity of states visited. Our empirical evaluation shows that ES-DQN outperforms n-step DQN with a fixed n, Double DQN and Average DQN in several OpenAI Gym environments, while at the same time alleviating the overestimation bias.
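To make the idea concrete, the sketch below illustrates how an elastic multi-step target could be formed: the bootstrap horizon n is extended while consecutive states remain "similar", and the usual n-step return is then computed. The abstract does not specify the similarity measure, threshold, or maximum horizon, so the cosine similarity, `similarity_threshold`, and `max_n` used here are illustrative assumptions rather than the paper's actual design.

```python
import numpy as np


def elastic_n_step_target(transitions, q_next, gamma=0.99,
                          similarity_threshold=0.9, max_n=8):
    """Illustrative sketch of an elastic multi-step bootstrapped target.

    transitions: list of (state, reward, next_state) tuples starting at the
        state being updated, in the order they were visited.
    q_next: callable mapping a state to max_a Q(state, a) under the target
        network (assumed to be provided by the surrounding DQN code).

    The horizon n grows while consecutive states stay similar (cosine
    similarity here is an assumption, not the paper's measure), capped at
    max_n and by the number of available transitions.
    """
    def cosine_sim(a, b):
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    # Extend the horizon while successive states remain similar.
    n = 1
    while (n < min(max_n, len(transitions)) and
           cosine_sim(transitions[n - 1][0], transitions[n - 1][2])
           >= similarity_threshold):
        n += 1

    # Standard n-step return: discounted rewards plus a bootstrapped value.
    target = sum(gamma ** k * transitions[k][1] for k in range(n))
    target += gamma ** n * q_next(transitions[n - 1][2])
    return target, n
```

Under this reading, long runs of near-identical states (where single-step bootstrapping tends to compound overestimation) receive longer-horizon targets, while abrupt state changes fall back towards single-step updates.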
