Evolving Generalizable Actor-Critic Algorithms
Deploying Reinforcement Learning (RL) agents in the real world requires designing and tuning algorithms for problem-specific objectives such as performance, robustness, or stability. These objectives can frequently change, necessitating further painstaking design and tuning. This paper presents MetaPG, an evolutionary method for designing new loss functions for actor-critic RL algorithms that optimize for different objectives. In particular, we focus on the objectives of final performance in the training regime, policy robustness to unseen environment configurations, and training curve stability across random seeds. We initialize our algorithm population from Soft Actor-Critic (SAC) and optimize for these objectives over a set of continuous control tasks from the Real-World RL Benchmark Suite. We find that our method evolves algorithms that, using a single environment during evolution, improve upon SAC's performance and generalizability by 3% and 17%, respectively, and reduce instability by up to 65% in that same environment. We then scale up to more complex environments from the Brax physics simulator and replicate conditions that can be encountered in practical settings (such as different friction coefficients). MetaPG evolves algorithms that obtain 9% better policy robustness within the same meta-training environment without losing performance or robustness in cross-domain evaluations on other Brax environments. Lastly, we analyze the structure of the best algorithms in the population and interpret the specific elements that help an algorithm optimize for a certain objective, such as regularizing the critic loss.
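The abstract describes evolving a population of algorithms against several objectives at once (performance, robustness, stability). As an illustration of the general idea, here is a minimal, hypothetical sketch of Pareto-based multi-objective selection of the kind such an evolutionary search could use; the function names and the toy score vectors are assumptions for illustration, not MetaPG's actual implementation.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives
    at least as good, at least one strictly better; higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(population, scores):
    """Return the individuals whose score vectors are not dominated
    by any other individual's score vector."""
    front = []
    for i, s in enumerate(scores):
        if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i):
            front.append(population[i])
    return front

# Toy example: three candidate algorithms scored on
# (performance, robustness); "sac_variant_a" is dominated.
population = ["sac_variant_a", "sac_variant_b", "sac_variant_c"]
scores = [(1.0, 1.0), (2.0, 2.0), (0.5, 3.0)]
survivors = pareto_front(population, scores)
```

In a full evolutionary loop, the surviving front would be mutated (e.g., by editing nodes of a loss-function graph) to produce the next generation, with each candidate re-evaluated on the three objectives.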