
Linear-Quadratic Mean-Field Reinforcement Learning: Convergence of Policy Gradient Methods

Abstract

We investigate reinforcement learning in the setting of Markov decision processes for a large number of exchangeable agents interacting in a mean-field manner. Applications include, for example, the control of a large number of robots communicating through a central unit that dispatches the optimal policy computed by maximizing an aggregate reward. An approximate solution is obtained by learning the optimal policy of a generic agent interacting with the statistical distribution of the states and actions of the other agents. We first provide a full analysis of this discrete-time mean-field control problem. We then rigorously prove the convergence of exact and model-free policy gradient methods in a mean-field linear-quadratic setting and establish bounds on the rates of convergence. We also provide graphical evidence of convergence based on implementations of our algorithms.
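To illustrate the flavor of a model-free policy gradient method in a mean-field linear-quadratic setting, the following is a minimal sketch (not the paper's exact algorithm). It runs a population of agents under a linear feedback u_i = -k1*(x_i - xbar) - k2*xbar and updates the gains with a two-point zeroth-order gradient estimate of the average cost; all dynamics and cost coefficients, and the feedback parametrization itself, are illustrative assumptions rather than the authors' specification.

# Minimal sketch: zeroth-order (model-free) policy gradient for a
# scalar mean-field LQ problem. Coefficients and parametrization are
# illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
A, Abar, B = 0.9, 0.2, 1.0          # dynamics: x' = A x + Abar xbar + B u + noise
Q, Qbar, R = 1.0, 0.5, 1.0          # running cost: Q x^2 + Qbar xbar^2 + R u^2
N, T, sigma = 200, 30, 0.1          # population size, horizon, noise level

def cost(theta):
    """Average finite-horizon cost under u = -k1*(x - xbar) - k2*xbar."""
    k1, k2 = theta
    x = rng.normal(1.0, 1.0, size=N)          # initial states of the N agents
    total = 0.0
    for _ in range(T):
        xbar = x.mean()                       # empirical mean field
        u = -k1 * (x - xbar) - k2 * xbar
        total += np.mean(Q * x**2 + Qbar * xbar**2 + R * u**2)
        x = A * x + Abar * xbar + B * u + sigma * rng.normal(size=N)
    return total / T

theta = np.array([0.0, 0.0])                  # initial feedback gains (k1, k2)
lr, r, n_samples = 0.05, 0.05, 20             # step size, smoothing radius, perturbations
for it in range(200):
    grad = np.zeros(2)
    for _ in range(n_samples):                # two-point zeroth-order gradient estimate
        d = rng.normal(size=2)
        d /= np.linalg.norm(d)
        grad += (cost(theta + r * d) - cost(theta - r * d)) / (2 * r) * d
    theta -= lr * grad / n_samples
    if it % 50 == 0:
        print(f"iter {it:3d}  gains {theta.round(3)}  cost {cost(theta):.3f}")

The zeroth-order estimator stands in for a model-free gradient oracle: it only requires evaluating the cost of perturbed policies, not knowledge of the dynamics coefficients, which is the regime in which convergence rates of this kind are typically analyzed.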

@article{carmona2025_1910.04295,
  title={Linear-Quadratic Mean-Field Reinforcement Learning: Convergence of Policy Gradient Methods},
  author={René Carmona and Mathieu Laurière and Zongjun Tan},
  journal={arXiv preprint arXiv:1910.04295},
  year={2025}
}