Symmetric Behavior Regularized Policy Optimization
- OffRL

Behavior Regularized Policy Optimization (BRPO) leverages asymmetric (divergence) regularization to mitigate distribution shift in offline reinforcement learning. This paper is the first to study the open question of symmetric regularization. We show that symmetric regularization does not permit an analytic optimal policy, posing a challenge to the practical utility of symmetric BRPO. We approximate the symmetric divergence by its Taylor series of Pearson-Vajda divergences and show that an analytic policy expression exists only when the series is capped at a finite order. To compute the solution in a numerically stable manner, we propose to Taylor-expand the conditional symmetry term of the symmetric divergence loss, leading to a novel algorithm: Symmetric -Actor Critic (S-AC). S-AC achieves consistently strong results across various D4RL MuJoCo tasks and avoids the per-environment failures observed in IQL, SQL, XQL, and AWAC, opening up possibilities for more diverse and effective regularization choices for offline RL.
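
To make the idea concrete, below is a minimal sketch (not the paper's S-AC implementation) of a behavior-regularized policy loss where the symmetric divergence is taken to be the Jeffreys divergence and its "conditional symmetry" term KL(mu || pi) is replaced by a second-order Pearson chi-squared Taylor approximation. The choice of divergence, the truncation order, the tabular softmax policy, and all variable names are illustrative assumptions only.

```python
# Sketch: symmetric behavior-regularized policy loss with a Taylor-approximated
# conditional symmetry term (assumptions: Jeffreys divergence, 2nd-order truncation).
import torch

torch.manual_seed(0)
num_actions = 4

# Fixed inputs for a single state: critic values Q(s, .) and behavior policy mu(. | s).
q_values = torch.tensor([1.0, 0.5, -0.2, 0.1])
mu = torch.tensor([0.4, 0.3, 0.2, 0.1])                 # behavior policy probabilities

logits = torch.zeros(num_actions, requires_grad=True)   # learnable policy parameters
alpha = 1.0                                             # regularization strength
opt = torch.optim.Adam([logits], lr=1e-2)

def symmetric_brpo_loss(logits, q_values, mu, alpha):
    pi = torch.softmax(logits, dim=-1)
    # Forward term KL(pi || mu): kept exact, as in standard (asymmetric) BRPO.
    kl_forward = torch.sum(pi * (torch.log(pi) - torch.log(mu)))
    # Conditional symmetry term KL(mu || pi): approximated by its second-order
    # Taylor term, i.e. 0.5 * Pearson chi^2(mu || pi) = 0.5 * sum (mu - pi)^2 / pi.
    chi2 = torch.sum((mu - pi) ** 2 / pi)
    kl_reverse_approx = 0.5 * chi2
    divergence = kl_forward + kl_reverse_approx
    # Behavior-regularized objective: maximize E_pi[Q] - alpha * D(pi, mu).
    expected_q = torch.sum(pi * q_values)
    return -(expected_q - alpha * divergence)

for step in range(500):
    opt.zero_grad()
    loss = symmetric_brpo_loss(logits, q_values, mu, alpha)
    loss.backward()
    opt.step()

print("learned policy:", torch.softmax(logits, dim=-1).detach().numpy())
```

The learned policy shifts probability toward high-Q actions while the two regularization terms keep it close to the behavior policy from both directions; the chi-squared approximation avoids evaluating log pi under mu, which is the kind of numerical-stability motivation the abstract alludes to.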