
Is Exploration or Optimization the Problem for Deep Reinforcement Learning?

Main: 9 pages, 6 figures; Bibliography: 4 pages
Abstract

In the era of deep reinforcement learning, making progress is more complex, as the collected experience must be compressed into a deep model for future exploitation and sampling. Many papers have shown that training a deep learning policy under a changing state and action distribution leads to sub-optimal performance, or even collapse. This naturally raises the concern that even if the community creates improved exploration algorithms or reward objectives, those improvements may fall on the "deaf ears" of optimization difficulties. This work proposes a new, practical sub-optimality estimator to determine the optimization limitations of deep reinforcement learning algorithms. Experiments across environments and RL algorithms show that the best experience generated is 2-3× better than the policy's learned performance. This large gap indicates that deep RL methods exploit only about half of the good experience they generate.
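To illustrate the kind of comparison the abstract describes, the following is a minimal Python sketch of a sub-optimality ratio computed from logged episode returns. The function name, inputs, and the exact formula (best experienced return divided by mean evaluated return of the learned policy) are illustrative assumptions for this sketch, not the paper's definition of its estimator.

    import numpy as np

    def suboptimality_ratio(experience_returns, policy_eval_returns):
        # Rough sub-optimality estimate: how much better the best collected
        # experience is than the learned policy's average performance.
        # experience_returns: episode returns observed in the training data.
        # policy_eval_returns: episode returns of the final learned policy.
        # (Names and formula are assumptions for illustration only.)
        best_experienced = np.max(experience_returns)
        learned_performance = np.mean(policy_eval_returns)
        return best_experienced / learned_performance

    # Example: returns logged during training vs. final policy evaluation.
    experience_returns = np.array([120.0, 340.0, 510.0, 295.0, 480.0])
    policy_eval_returns = np.array([210.0, 190.0, 230.0, 205.0])
    print(f"sub-optimality ratio: "
          f"{suboptimality_ratio(experience_returns, policy_eval_returns):.2f}")

Under this reading, a ratio near 1 would mean the learned policy matches the best behavior it has already experienced, while a ratio of 2-3 (as reported in the abstract) indicates that much of the good experience is never exploited by the final policy.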
