
Return Capping: Sample-Efficient CVaR Policy Gradient Optimisation

Main text: 7 pages, 9 figures, 6 tables; bibliography: 3 pages; appendix: 6 pages
Abstract

When optimising for conditional value at risk (CVaR) with policy gradients (PG), current methods rely on discarding a large proportion of sampled trajectories, resulting in poor sample efficiency. We propose a reformulation of the CVaR optimisation problem that caps the total return of trajectories used in training rather than discarding them, and we show that this is equivalent to the original problem when the cap is set appropriately. We show, with empirical results in a number of environments, that this reformulation consistently improves performance over baselines. We have made all our code available here: this https URL.
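As a rough illustration of the contrast the abstract draws, below is a minimal NumPy sketch of REINFORCE-style per-trajectory gradient weights: a standard CVaR-PG estimator zeroes out every trajectory whose return exceeds the empirical value at risk (VaR), whereas the capped objective keeps all trajectories but clips their returns at a cap. The function names, the tail estimator (following the usual CVaR-PG form), and the example choice of cap are illustrative assumptions, not the paper's exact formulation; in particular, the paper's rule for setting the cap appropriately is not reproduced here.

```python
import numpy as np

def cvar_pg_weights(returns, alpha=0.1):
    """Sketch of a standard CVaR-PG estimator: only the worst
    alpha-fraction of trajectories (returns at or below the
    empirical VaR) receive gradient weight; the rest contribute
    nothing, which is the sample inefficiency the paper targets."""
    var = np.quantile(returns, alpha)        # empirical VaR threshold
    tail = returns <= var                    # tail trajectories only
    weights = np.where(tail, returns - var, 0.0)
    return weights / max(tail.sum(), 1)

def capped_return_weights(returns, cap):
    """Sketch of the return-capping reformulation: every trajectory
    is kept, but its return is clipped at `cap`, so no sampled
    trajectory is discarded from the gradient estimate."""
    return np.minimum(returns, cap) / len(returns)

# Usage: with alpha = 0.1, roughly 90% of CVaR-PG weights are zero,
# while the capped weights are nonzero for every trajectory.
rng = np.random.default_rng(0)
R = rng.normal(size=1000)                    # stand-in trajectory returns
w_cvar = cvar_pg_weights(R, alpha=0.1)
w_cap = capped_return_weights(R, cap=np.quantile(R, 0.1))  # example cap
```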
