
Convergence of a model-free entropy-regularized inverse reinforcement learning algorithm

Abstract

Given a dataset of expert demonstrations, inverse reinforcement learning (IRL) aims to recover a reward for which the expert is optimal. This work proposes a model-free algorithm to solve the entropy-regularized IRL problem. In particular, we employ a stochastic gradient descent update for the reward and a stochastic soft policy iteration update for the policy. Assuming access to a generative model, we prove that our algorithm is guaranteed to recover a reward for which the expert is $\varepsilon$-optimal using $\mathcal{O}(1/\varepsilon^{2})$ samples of the Markov decision process (MDP). Furthermore, with $\mathcal{O}(1/\varepsilon^{4})$ samples we prove that the optimal policy corresponding to the recovered reward is $\varepsilon$-close to the expert policy in total variation distance.
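The abstract describes an alternating scheme: a gradient step on the reward followed by a soft policy iteration step on the policy. The sketch below illustrates that loop in a tabular setting, assuming a linear reward $r_\theta = \phi^\top\theta$ and exact soft Bellman backups and occupancy computations in place of the paper's stochastic, sample-based updates; all names (`phi`, `tau`, `lr`, the iteration counts) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def entropy_regularized_irl(P, phi, expert_sa_visits, gamma=0.99, tau=1.0,
                            lr=0.1, n_outer=200, n_inner=100):
    """Illustrative tabular loop (not the paper's algorithm verbatim).

    P: (S, A, S) transition tensor of the MDP.
    phi: (S, A, d) feature map for the assumed linear reward.
    expert_sa_visits: (S, A) empirical state-action occupancy of the expert.
    """
    S, A, d = phi.shape
    theta = np.zeros(d)                       # linear reward parameters
    pi = np.full((S, A), 1.0 / A)             # start from the uniform policy

    for _ in range(n_outer):
        r = phi @ theta                       # (S, A) reward table

        # Soft policy iteration step: soft Bellman backups, then a
        # softmax (Boltzmann) policy at temperature tau.
        Q = np.zeros((S, A))
        for _ in range(n_inner):
            V = tau * np.log(np.exp(Q / tau).sum(axis=1))   # soft value
            Q = r + gamma * (P @ V)                         # soft backup
        pi = np.exp((Q - Q.max(axis=1, keepdims=True)) / tau)
        pi /= pi.sum(axis=1, keepdims=True)

        # Reward gradient step: move theta so the expert's feature
        # expectations dominate those of the current policy.
        mu = np.full(S, 1.0 / S)              # discounted state occupancy
        for _ in range(n_inner):
            mu = (1 - gamma) / S + gamma * np.einsum('s,sa,sat->t', mu, pi, P)
        d_pi = mu[:, None] * pi               # (S, A) state-action occupancy
        grad = np.einsum('sa,sad->d', expert_sa_visits - d_pi, phi)
        theta += lr * grad

    return theta, pi
```

In the paper's model-free setting, the exact backups and occupancies above would be replaced by stochastic estimates drawn from the generative model, which is what drives the stated sample complexities.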
