Learning Closed-Loop Parametric Nash Equilibria of Multi-Agent Collaborative Field Coverage
Multi-agent reinforcement learning is a challenging and active field of research due to the inherent nonstationarity and the coupling between agents. A popular approach to modeling the interactions underlying a multi-agent RL problem is the Markov Game. A special class of Markov Games, termed Markov Potential Games, allows the game to be reduced to a single-objective optimal control problem whose objective is a potential function. In this work, we prove that a multi-agent collaborative field coverage problem, which arises in many engineering applications, can be formulated as a Markov Potential Game, and that a parameterized closed-loop Nash Equilibrium can be learned by solving an equivalent single-objective optimal control problem. As a result, our algorithm is 10x faster to train than a game-theoretic baseline and converges faster during policy execution.
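For context, a common definition from the potential-game literature (stated here as background, not quoted from the paper): a Markov Game is a Markov Potential Game when there exists a potential function \(\Phi\) such that any unilateral deviation by one agent changes that agent's value and the potential by the same amount,

\[
V_i^{\pi_i, \pi_{-i}}(s) - V_i^{\pi_i', \pi_{-i}}(s) = \Phi^{\pi_i, \pi_{-i}}(s) - \Phi^{\pi_i', \pi_{-i}}(s) \quad \forall i,\, s,\, \pi_i,\, \pi_i',\, \pi_{-i}.
\]

Under this condition, a policy profile maximizing the potential's value is a Nash Equilibrium of the original game, which is what justifies the single-objective reformulation described in the abstract.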
@article{chen2025_2503.11829,
  title={Learning Closed-Loop Parametric Nash Equilibria of Multi-Agent Collaborative Field Coverage},
  author={Jushan Chen and Santiago Paternain},
  journal={arXiv preprint arXiv:2503.11829},
  year={2025}
}