
Model-free Posterior Sampling via Learning Rate Randomization

Neural Information Processing Systems (NeurIPS), 2023
Main: 10 pages · Appendix: 42 pages · Bibliography: 5 pages · 7 figures · 3 tables
Abstract

In this paper, we introduce Randomized Q-learning (RandQL), a novel randomized model-free algorithm for regret minimization in episodic Markov Decision Processes (MDPs). To the best of our knowledge, RandQL is the first tractable model-free posterior-sampling-based algorithm. We analyze the performance of RandQL in both tabular and non-tabular metric space settings. In tabular MDPs, RandQL achieves a regret bound of order $\widetilde{O}(\sqrt{H^{5}SAT})$, where $H$ is the planning horizon, $S$ is the number of states, $A$ is the number of actions, and $T$ is the number of episodes. For a metric state-action space, RandQL enjoys a regret bound of order $\widetilde{O}(H^{5/2} T^{(d_z+1)/(d_z+2)})$, where $d_z$ denotes the zooming dimension. Notably, RandQL achieves optimistic exploration without using bonuses, relying instead on a novel idea of learning rate randomization. Our empirical study shows that RandQL outperforms existing approaches on baseline exploration environments.
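To make the learning-rate-randomization idea concrete, here is a minimal toy sketch in Python. All names, shapes, and the Beta parameters are illustrative assumptions, not the authors' exact algorithm: it shows a tabular Q-learning update where the step size is sampled from a Beta distribution, so randomness in the update plays the exploratory role that a bonus term usually would.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (hypothetical): horizon H, S states, A actions.
H, S, A = 3, 4, 2
Q = np.full((H, S, A), float(H))  # optimistic initialization at the value bound
N = np.zeros((H, S, A))           # per-(step, state, action) visit counts

def randomized_update(h, s, a, reward, s_next):
    """One Bellman-style update with a Beta-randomized learning rate.

    Instead of a deterministic step size plus an exploration bonus, the
    step size w is drawn at random; its distribution concentrates as the
    visit count n grows, so early updates are noisy and late ones stable.
    """
    N[h, s, a] += 1
    n = N[h, s, a]
    w = rng.beta(H + 1, n)  # random learning rate in (0, 1); illustrative choice
    next_value = Q[h + 1, s_next].max() if h + 1 < H else 0.0
    Q[h, s, a] = (1 - w) * Q[h, s, a] + w * (reward + next_value)

# One sample update: step 0, state 0, action 1, reward 1.0, next state 2.
randomized_update(0, 0, 1, 1.0, 2)
```

Since `w` lies in (0, 1), each update moves the entry partway toward the Bellman target; the randomness over `w`, combined with the optimistic initialization, is what stands in for an explicit bonus in this sketch.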
