
Differentially Private Exploration in Reinforcement Learning with Linear Representation

Abstract

This paper studies privacy-preserving exploration in Markov Decision Processes (MDPs) with linear representation. We first consider the setting of linear-mixture MDPs (Ayoub et al., 2020) (a.k.a.\ the model-based setting) and provide a unified framework for analyzing joint and local differentially private (DP) exploration. Through this framework, we prove an $\widetilde{O}(K^{3/4}/\sqrt{\epsilon})$ regret bound for $(\epsilon,\delta)$-local DP exploration and an $\widetilde{O}(\sqrt{K/\epsilon})$ regret bound for $(\epsilon,\delta)$-joint DP. We further study privacy-preserving exploration in linear MDPs (Jin et al., 2020) (a.k.a.\ the model-free setting), where we provide an $\widetilde{O}\left(K^{3/5}/\epsilon^{2/5}\right)$ regret bound for $(\epsilon,\delta)$-joint DP, with a novel algorithm based on low-switching. Finally, we provide insights into the issues of designing local DP algorithms in this model-free setting.
