
Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost

International Conference on Machine Learning (ICML), 2022
Abstract

We study the problem of reinforcement learning (RL) with low (policy) switching cost, a problem well motivated by real-life RL applications in which deployments of new policies are costly and the number of policy updates must be low. In this paper, we propose a new algorithm based on stage-wise exploration and adaptive policy elimination that achieves a regret of $\widetilde{O}(\sqrt{H^4 S^2 A T})$ while requiring a switching cost of only $O(HSA \log\log T)$. This is an exponential improvement over the best-known switching cost $O(H^2 SA \log T)$ among existing methods with $\widetilde{O}(\mathrm{poly}(H,S,A)\sqrt{T})$ regret. In the above, $S$ and $A$ denote the numbers of states and actions in an $H$-horizon episodic Markov decision process with unknown transitions, and $T$ is the number of steps. We also prove an information-theoretic lower bound showing that a switching cost of $\Omega(HSA)$ is necessary for any no-regret algorithm. As a byproduct, our new algorithmic techniques allow us to derive a \emph{reward-free} exploration algorithm with an optimal switching cost of $O(HSA)$.
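To see where the $\log\log T$ factor comes from, it helps to contrast the stage schedule with the classical doubling trick. Doubling stage lengths ($t_k = 2^k$) forces $O(\log T)$ policy switches, whereas endpoints of the form $t_k = T^{1-2^{-k}}$ halve the exponent gap to $1$ each stage, so only $O(\log\log T)$ stages cover all $T$ steps. The sketch below is a minimal illustration of this counting argument only, not the paper's full algorithm (which additionally performs adaptive policy elimination within stages and accounts for switches per state-action-step triple); `stage_schedule` is a hypothetical helper name.

```python
import math

def stage_schedule(T):
    """Illustrative doubly-exponential stage endpoints t_k = T^(1 - 2^-k).

    The exponent gap to 1 halves every stage, so roughly
    K = ceil(log2(log2(T))) + 1 stages already reach step T.  A learner
    that updates its policy only at these endpoints switches O(log log T)
    times, versus O(log T) under the doubling schedule t_k = 2^k.
    NOTE: a simplified sketch of the schedule, not the paper's algorithm.
    """
    K = max(1, math.ceil(math.log2(math.log2(T)))) + 1
    endpoints = [min(T, math.floor(T ** (1 - 2.0 ** (-k)))) for k in range(1, K + 1)]
    endpoints[-1] = T  # cap: the final stage always runs to the horizon T
    return endpoints

for T in (10**4, 10**6, 10**12):
    eps = stage_schedule(T)
    print(f"T = {T:<14d} stages = {len(eps):2d}  endpoints = {eps}")
```

Even at $T = 10^{12}$ the schedule uses only 7 stages, which is the source of the exponential gap between $\log T$ and $\log\log T$ switching cost in the abstract.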
