
Sample-Efficient Reinforcement Learning with loglog(T) Switching Cost

Abstract

We study the problem of reinforcement learning (RL) with low (policy) switching cost, a problem well-motivated by real-life RL applications in which deployments of new policies are costly and the number of policy updates must be low. In this paper, we propose a new algorithm based on stage-wise exploration and adaptive policy elimination that achieves a regret of $\widetilde{O}(\sqrt{H^4 S^2 A T})$ while requiring a switching cost of $O(HSA \log\log T)$. This is an exponential improvement over the best-known switching cost $O(H^2 SA \log T)$ among existing methods with $\widetilde{O}(\mathrm{poly}(H,S,A)\sqrt{T})$ regret. In the above, $S$ and $A$ denote the number of states and actions in an $H$-horizon episodic Markov Decision Process model with unknown transitions, and $T$ is the number of steps. As a byproduct of our new techniques, we also derive a reward-free exploration algorithm with a switching cost of $O(HSA)$. Furthermore, we prove a pair of information-theoretic lower bounds which say that (1) any no-regret algorithm must have a switching cost of $\Omega(HSA)$; (2) any $\widetilde{O}(\sqrt{T})$ regret algorithm must incur a switching cost of $\Omega(HSA \log\log T)$. Both our algorithms are thus optimal in their switching costs.
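The $\log\log T$ scaling comes from running the algorithm in stages whose lengths grow doubly exponentially, with the policy updated only at stage boundaries. The sketch below is our own illustration (not the paper's exact schedule or elimination rule): it uses the hypothetical stage endpoints $t_k = T^{1 - 2^{-k}}$ and simply counts how many stages fit in a horizon of $T$ steps, showing that the count tracks $\log_2 \log_2 T$.

```python
import math

def stage_endpoints(T):
    """Illustrative doubly-exponential stage schedule t_k = T^(1 - 2^-k).

    Once t_k >= T/2 (which happens after roughly log2(log2(T)) stages),
    a single final stage finishes the horizon, so the total number of
    stages -- and hence of policy switches per update point -- is
    O(log log T).
    """
    ends, k = [], 1
    while True:
        t_k = int(T ** (1 - 2.0 ** (-k)))
        if t_k >= T // 2:
            ends.append(T)  # final stage runs to the end of the horizon
            return ends
        ends.append(t_k)
        k += 1

for T in (10**4, 10**6, 10**8, 10**12):
    ends = stage_endpoints(T)
    print(f"T=10^{round(math.log10(T))}: {len(ends)} stages, "
          f"log2(log2(T)) ~ {math.log2(math.log2(T)):.1f}")
```

For $T = 10^4$ through $T = 10^{12}$ this prints 4 to 6 stages, matching the $\log_2\log_2 T$ column; by contrast, a standard doubling schedule would give $\Theta(\log T)$ stages, which is the source of the $\log T$ factor in prior switching-cost bounds.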
