
Finite-Time Analysis of Simultaneous Double Q-learning

Abstract

Q-learning is one of the most fundamental reinforcement learning (RL) algorithms. Despite its widespread success in various applications, it is prone to overestimation bias in the Q-learning update. To address this issue, double Q-learning employs two independent Q-estimators, one of which is randomly selected and updated at each step of the learning process. This paper proposes a modified double Q-learning algorithm, called simultaneous double Q-learning (SDQ), together with its finite-time analysis. SDQ eliminates the need for random selection between the two Q-estimators, and this modification allows us to analyze double Q-learning through the lens of a novel switching system framework that facilitates efficient finite-time analysis. Empirical studies demonstrate that SDQ converges faster than double Q-learning while retaining its ability to mitigate maximization bias. Finally, we derive a finite-time expected error bound for SDQ.
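To illustrate the simultaneous-update idea described above, the sketch below shows a tabular double-Q step in which both estimators are updated at every transition, rather than one being chosen at random as in classical double Q-learning. The precise SDQ update rule is given in the paper, not in this abstract, so the function name, step size, and cross-bootstrapping targets here are illustrative assumptions only.

```python
import numpy as np

def sdq_style_update(q_a, q_b, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One hypothetical SDQ-style step on tabular Q-tables (illustrative sketch).

    q_a, q_b : arrays of shape (num_states, num_actions)
    s, a     : current state and action indices
    r        : observed reward
    s_next   : next state index
    """
    # Target for Q_A: greedy action chosen by Q_A, value evaluated by Q_B.
    a_star = np.argmax(q_a[s_next])
    target_a = r + gamma * q_b[s_next, a_star]

    # Target for Q_B: greedy action chosen by Q_B, value evaluated by Q_A.
    b_star = np.argmax(q_b[s_next])
    target_b = r + gamma * q_a[s_next, b_star]

    # Unlike classical double Q-learning, no coin flip selects which
    # estimator to update; both are updated simultaneously at every step.
    q_a[s, a] += alpha * (target_a - q_a[s, a])
    q_b[s, a] += alpha * (target_b - q_b[s, a])
    return q_a, q_b
```

Keeping the cross-bootstrapped targets while updating both tables every step preserves the decoupling that mitigates maximization bias, and removing the random estimator selection is what allows the learning dynamics to be viewed as a switching system, as the abstract notes.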
