
On the Approximation of Cooperative Heterogeneous Multi-Agent Reinforcement Learning (MARL) using Mean Field Control (MFC)

Journal of Machine Learning Research (JMLR), 2021
Abstract

Mean field control (MFC) is an effective way to mitigate the curse of dimensionality of cooperative multi-agent reinforcement learning (MARL) problems. This work considers a collection of $N_{\mathrm{pop}}$ heterogeneous agents that can be segregated into $K$ classes such that the $k$-th class contains $N_k$ homogeneous agents. We aim to prove approximation guarantees of the MARL problem for this heterogeneous system by its corresponding MFC problem. We consider three scenarios where the reward and transition dynamics of all agents are respectively taken to be functions of (1) joint state and action distributions across all classes, (2) individual distributions of each class, and (3) marginal distributions of the entire population. We show that, in these cases, the $K$-class MARL problem can be approximated by MFC with errors given as $e_1 = \mathcal{O}\big(\frac{\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}}{N_{\mathrm{pop}}}\sum_{k}\sqrt{N_k}\big)$, $e_2 = \mathcal{O}\big(\big[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\big]\sum_{k}\frac{1}{\sqrt{N_k}}\big)$ and $e_3 = \mathcal{O}\big(\big[\sqrt{|\mathcal{X}|}+\sqrt{|\mathcal{U}|}\big]\big[\frac{A}{N_{\mathrm{pop}}}\sum_{k\in[K]}\sqrt{N_k}+\frac{B}{\sqrt{N_{\mathrm{pop}}}}\big]\big)$, respectively, where $A, B$ are some constants and $|\mathcal{X}|, |\mathcal{U}|$ are the sizes of the state and action spaces of each agent. Finally, we design a Natural Policy Gradient (NPG) based algorithm that, in the three cases stated above, can converge to an optimal MARL policy within $\mathcal{O}(e_j)$ error with a sample complexity of $\mathcal{O}(e_j^{-3})$, $j \in \{1, 2, 3\}$, respectively.
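To make the three bounds concrete, below is a minimal, purely illustrative Python sketch that evaluates their scaling for a hypothetical class configuration. The constants $A$ and $B$ (and any hidden constant and logarithmic factors in the $\mathcal{O}(\cdot)$ notation) are placeholders set to 1, since the abstract only states that they exist; the function names and example numbers are assumptions for illustration, not part of the paper.

```python
import math

def error_scalings(class_sizes, X, U, A=1.0, B=1.0):
    """Illustrative scaling of the three MARL-to-MFC approximation errors.

    class_sizes: list of class sizes N_1, ..., N_K
    X, U: sizes of the per-agent state and action spaces
    A, B: placeholder constants (the abstract only asserts their existence)
    """
    N_pop = sum(class_sizes)
    size_term = math.sqrt(X) + math.sqrt(U)
    # Case 1: dynamics depend on joint distributions across all classes.
    e1 = size_term / N_pop * sum(math.sqrt(Nk) for Nk in class_sizes)
    # Case 2: dynamics depend on each class's individual distribution.
    e2 = size_term * sum(1.0 / math.sqrt(Nk) for Nk in class_sizes)
    # Case 3: dynamics depend on the marginal distribution of the population.
    e3 = size_term * (A / N_pop * sum(math.sqrt(Nk) for Nk in class_sizes)
                      + B / math.sqrt(N_pop))
    return e1, e2, e3

# Example: K = 3 classes of sizes 100, 400, 900, with |X| = 10, |U| = 5.
print(error_scalings([100, 400, 900], X=10, U=5))
```

As the expressions suggest, $e_1$ shrinks as the total population grows (by Cauchy-Schwarz, $\frac{1}{N_{\mathrm{pop}}}\sum_k\sqrt{N_k} \le \sqrt{K/N_{\mathrm{pop}}}$), whereas $e_2$ is dominated by the smallest class, since any small $N_k$ inflates the sum $\sum_k 1/\sqrt{N_k}$.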
