
Symmetric Behavior Regularized Policy Optimization

Main: 10 pages · Appendix: 8 pages · Bibliography: 4 pages · 12 figures · 10 tables
Abstract

Behavior Regularized Policy Optimization (BRPO) leverages asymmetric (divergence) regularization to mitigate distribution shift in offline reinforcement learning. This paper is the first to study the open question of symmetric regularization. We show that symmetric regularization does not admit an analytic optimal policy $\pi^*$, posing a challenge to the practical utility of symmetric BRPO. We approximate $\pi^*$ by a Taylor series of Pearson-Vajda $\chi^n$ divergences and show that an analytic policy expression exists only when the series is capped at $n = 5$. To compute the solution in a numerically stable manner, we propose to Taylor expand the conditional symmetry term of the symmetric divergence loss, leading to a novel algorithm: Symmetric $f$-Actor-Critic (S$f$-AC). S$f$-AC achieves consistently strong results across various D4RL MuJoCo tasks. Additionally, S$f$-AC avoids the per-environment failures observed in IQL, SQL, XQL, and AWAC, opening up possibilities for more diverse and effective regularization choices for offline RL.
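To make the truncation idea concrete, the following is a minimal, hypothetical sketch (not the paper's exact construction): a symmetric divergence penalty on the density ratio $r = \pi(a \mid s) / \mu(a \mid s)$ is approximated by a polynomial series in $(r - 1)$ truncated at a chosen order and added to a standard behavior-regularized actor objective. The coefficients $1/(n(n-1))$ and the helper names taylor_symmetric_penalty and actor_loss are illustrative assumptions, not quantities taken from the paper.

import torch

def taylor_symmetric_penalty(log_ratio: torch.Tensor, order: int = 5) -> torch.Tensor:
    """Truncated series approximation of a (symmetric) divergence penalty.

    log_ratio = log pi(a|s) - log mu(a|s) for actions sampled from the
    behavior policy mu. Expanding around r = 1 yields polynomial terms in
    (r - 1); we keep terms up to `order`. Coefficients are placeholders.
    """
    r = log_ratio.exp()
    penalty = torch.zeros_like(r)
    for n in range(2, order + 1):
        penalty = penalty + (r - 1.0) ** n / (n * (n - 1))
    return penalty.mean()

def actor_loss(q_values: torch.Tensor, log_pi: torch.Tensor,
               log_mu: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Behavior-regularized objective: maximize Q, pay a divergence cost."""
    return -q_values.mean() + alpha * taylor_symmetric_penalty(log_pi - log_mu)

In this sketch, capping the series at a finite order is what keeps the regularizer tractable when the exact symmetric divergence admits no closed-form optimal policy; the hyperparameter alpha trades off exploitation of the critic against staying close to the behavior policy.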
