
Efficient Learning in Non-Stationary Linear Markov Decision Processes

Ahmed Touati
Pascal Vincent
Abstract

We study episodic reinforcement learning in non-stationary linear (a.k.a. low-rank) Markov Decision Processes (MDPs), i.e., both the reward and transition kernel are linear with respect to a given feature map and are allowed to evolve either slowly or abruptly over time. For this problem setting, we propose OPT-WLSVI, an optimistic model-free algorithm based on weighted least squares value iteration that uses exponential weights to smoothly forget data that are far in the past. We show that our algorithm, when competing against the best policy at each time, achieves a regret upper bounded by $\widetilde{\mathcal{O}}(d^{5/4} H^2 \Delta^{1/4} K^{3/4})$, where $d$ is the dimension of the feature space, $H$ is the planning horizon, $K$ is the number of episodes, and $\Delta$ is a suitable measure of the non-stationarity of the MDP. Moreover, we point out technical gaps in the study of forgetting strategies in the non-stationary linear bandit setting made by previous works, and we propose a fix to their regret analysis.
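To illustrate the forgetting mechanism mentioned above, the sketch below shows a generic exponentially weighted ridge regression of the kind underlying weighted least squares value iteration: past samples are down-weighted geometrically by a forgetting factor. This is only an illustrative sketch under assumed parameter names (`eta`, `lam`), not the authors' full OPT-WLSVI algorithm, which additionally builds optimistic value estimates with an exploration bonus.

```python
import numpy as np

def weighted_ridge_regression(features, targets, eta=0.99, lam=1.0):
    """Exponentially weighted ridge regression (illustrative sketch).

    Each past sample k receives weight eta**(K - 1 - k), so the most
    recent sample has weight 1 and older data is smoothly forgotten.
    features: (K, d) array of feature vectors
    targets:  (K,)   array of regression targets
    eta:      forgetting factor in (0, 1]; eta = 1 recovers unweighted regression
    lam:      ridge regularization strength
    """
    K, d = features.shape
    weights = eta ** (K - 1 - np.arange(K))
    # Weighted Gram matrix and weighted feature-target correlation
    Lambda = lam * np.eye(d) + features.T @ (weights[:, None] * features)
    b = features.T @ (weights * targets)
    w_hat = np.linalg.solve(Lambda, b)
    return w_hat, Lambda
```

As the non-stationarity measure $\Delta$ grows, a smaller forgetting factor discards stale data faster, trading estimation variance against bias from outdated samples.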
