Federated Reinforcement Learning with Environment Heterogeneity

We study a Federated Reinforcement Learning (FedRL) problem in which agents collaboratively learn a single policy without sharing the trajectories collected during agent-environment interaction. We emphasize the constraint of environment heterogeneity: the environments of different agents have different state-transition dynamics. To obtain a value function or a policy function that optimizes the overall performance across all environments, we propose two federated RL algorithms, \texttt{QAvg} and \texttt{PAvg}. We theoretically prove that these algorithms converge to suboptimal solutions, where the suboptimality depends on how heterogeneous the environments are. Moreover, we propose a heuristic that achieves personalization by embedding each environment into a vector. This personalization heuristic not only improves training but also generalizes better to new environments.
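The abstract gives no pseudocode, so the following is a minimal tabular sketch of the \texttt{QAvg} idea as described: each agent runs local Bellman backups under its own transition dynamics, and a server periodically averages the local Q-tables into a shared one. The function and variable names (`qavg`, `P_list`, `local_steps`, `rounds`), the shared reward table, and the synchronous value-iteration setting are all illustrative assumptions, not the paper's implementation.

```python
# Sketch of QAvg-style federated Q averaging under heterogeneous dynamics.
# Assumption: tabular MDPs sharing states/actions/rewards, differing only in
# their transition kernels P_i, as in the environment-heterogeneity setting.
import numpy as np

def qavg(P_list, R, gamma=0.9, local_steps=5, rounds=50):
    """P_list: per-agent transition kernels, each of shape (S, A, S).
    R: shared reward table of shape (S, A)."""
    S, A = R.shape
    Q_global = np.zeros((S, A))
    for _ in range(rounds):
        local_Qs = []
        for P in P_list:
            Q = Q_global.copy()
            for _ in range(local_steps):
                # Local Bellman optimality backup under this agent's dynamics:
                # Q(s,a) <- R(s,a) + gamma * sum_s' P_i(s'|s,a) max_a' Q(s',a')
                Q = R + gamma * (P @ Q.max(axis=1))
            local_Qs.append(Q)
        # Server step: average the agents' local Q-tables
        Q_global = np.mean(local_Qs, axis=0)
    return Q_global

# Toy usage with randomly generated heterogeneous kernels (illustrative only)
rng = np.random.default_rng(0)
S, A, N = 4, 2, 3
P_list = [rng.dirichlet(np.ones(S), size=(S, A)) for _ in range(N)]
R = rng.random((S, A))
Q = qavg(P_list, R)
policy = Q.argmax(axis=1)  # one greedy policy shared across all environments
```

\texttt{PAvg} would analogously average policy parameters rather than Q-tables; this sketch only illustrates the periodic-averaging scheme, not the paper's convergence analysis or its neural-network instantiation.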