Multi-Batch Experience Replay for Fast Convergence of Continuous Action Control
- OffRL
Policy gradient methods for direct policy optimization are widely used to obtain optimal policies in continuous Markov decision process (MDP) environments. However, the number of samples these methods require grows exponentially with the dimension of the action space. Off-policy learning with experience replay has therefore been proposed so that the agent can learn from samples generated by other policies. Large replay memories are generally preferred because they reduce sample correlation, but they can also introduce large bias or variance in importance-sampling-based off-policy learning. In this paper, we propose a multi-batch experience replay scheme suited to off-policy actor-critic-style policy gradient methods such as the proximal policy optimization (PPO) algorithm; the scheme retains the advantages of experience replay and accelerates learning without incurring large bias. To demonstrate the superiority of the proposed method, we apply the experience replay scheme to the PPO algorithm and evaluate it on various continuous control tasks. Numerical results show that our algorithm converges faster, and to solutions closer to the global optimum, than other policy gradient methods.
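The abstract does not spell out the scheme's implementation, but the core idea it describes (reusing a small window of recent batches while correcting for policy mismatch with clipped importance ratios, rather than keeping a large replay memory) can be sketched as below. This is a minimal illustration under assumed details: the names `MultiBatchReplay` and `ppo_clip_loss`, the window size, and all parameters are hypothetical and not taken from the paper.

```python
import numpy as np
from collections import deque

class MultiBatchReplay:
    """Bounded replay of the K most recent on-policy batches (hypothetical sketch).

    Each batch stores the log-probabilities assigned by the policy that
    collected it, so clipped importance ratios can be formed against the
    current policy. Retaining only a few recent batches bounds the policy
    mismatch, avoiding the large bias/variance of a big replay memory.
    """

    def __init__(self, num_batches=4):
        # Oldest batches are evicted automatically once the window is full.
        self.batches = deque(maxlen=num_batches)

    def add_batch(self, states, actions, behavior_log_probs, advantages):
        self.batches.append({
            "states": states,
            "actions": actions,
            "behavior_log_probs": behavior_log_probs,  # log pi_behavior(a|s) at collection time
            "advantages": advantages,
        })

    def __iter__(self):
        # Iterate over all retained batches for one epoch of updates.
        return iter(self.batches)


def ppo_clip_loss(current_log_probs, behavior_log_probs, advantages, clip_eps=0.2):
    """Clipped PPO surrogate loss with importance ratios to the behavior policy."""
    ratio = np.exp(current_log_probs - behavior_log_probs)
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    # Negated so the result is a loss to minimize.
    return -np.mean(np.minimum(ratio * advantages, clipped * advantages))
```

In a training loop of this kind, each new rollout would be pushed with `add_batch`, and every retained batch would contribute gradient steps through `ppo_clip_loss`, with ratios computed against the log-probabilities stored when the batch was collected.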