Finite-Time Error Analysis of Online Model-Based Q-Learning with a Relaxed Sampling Model

Abstract

Reinforcement learning has seen significant advances, particularly with the emergence of model-based approaches. Q-learning, by contrast, has proven to be a powerful algorithm in model-free settings, yet its extension to a model-based framework remains relatively unexplored. In this paper, we study the sample complexity of Q-learning when integrated with a model-based approach. Through theoretical analysis and empirical evaluation, we seek to elucidate the conditions under which model-based Q-learning outperforms its model-free counterpart in terms of sample efficiency.
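
The abstract does not spell out the algorithm itself. As a point of reference only, the sketch below shows what "model-based Q-learning" typically means in the tabular setting: estimate an empirical transition model and reward function from online samples, then run Q-value iteration on the estimated MDP. All names here (the sample_step environment interface, the uniform behavior policy, the sizes and constants) are assumptions for exposition, not the paper's method.

```python
import numpy as np

def model_based_q_learning(sample_step, n_states, n_actions,
                           gamma=0.9, n_samples=10_000, n_iters=200):
    """Hypothetical tabular model-based Q-learning sketch:
    build an empirical model online, then plan on it."""
    counts = np.zeros((n_states, n_actions, n_states))
    reward_sum = np.zeros((n_states, n_actions))

    # Phase 1: collect transitions online with a behavior policy
    # (uniformly random here; sample_step(s, a) -> (s_next, r) is an
    # assumed environment interface).
    s = 0
    for _ in range(n_samples):
        a = np.random.randint(n_actions)
        s_next, r = sample_step(s, a)
        counts[s, a, s_next] += 1
        reward_sum[s, a] += r
        s = s_next

    visits = counts.sum(axis=2)
    # Empirical model; unvisited (s, a) pairs fall back to a uniform
    # transition so the estimated MDP is well defined everywhere.
    P_hat = np.where(visits[..., None] > 0,
                     counts / np.maximum(visits[..., None], 1),
                     1.0 / n_states)
    R_hat = reward_sum / np.maximum(visits, 1)

    # Phase 2: Q-value iteration on the estimated model.
    # (S, A, S) @ (S,) contracts the last axis, giving a (S, A) array.
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        Q = R_hat + gamma * P_hat @ Q.max(axis=1)
    return Q
```

The sample-complexity question the abstract raises is, roughly, how large n_samples must be for the Q-function computed on the estimated model to be close to the optimal Q-function of the true MDP.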
