Scalable Primal-Dual Actor-Critic Method for Safe Multi-Agent RL with General Utilities

We investigate safe multi-agent reinforcement learning, where agents seek to collectively maximize an aggregate sum of local objectives while satisfying their own safety constraints. The objectives and constraints are described by {\it general utilities}, i.e., nonlinear functions of the long-term state-action occupancy measure, which encompass broader decision-making goals such as risk, exploration, or imitation. The exponential growth of the state-action space size with the number of agents presents challenges for global observability, further exacerbated by the global coupling arising from agents' safety constraints. To tackle this issue, we propose a primal-dual method utilizing shadow reward and $\kappa$-hop neighbor truncation under a form of correlation decay property, where $\kappa$ is the communication radius. In the exact setting, our algorithm converges to a first-order stationary point (FOSP) at the rate of $\mathcal{O}(T^{-2/3})$. In the sample-based setting, we demonstrate that, with high probability, our algorithm requires $\widetilde{\mathcal{O}}(\epsilon^{-3.5})$ samples to achieve an $\epsilon$-FOSP with an approximation error of $\mathcal{O}(\phi_0^{2\kappa})$, where $\phi_0 \in (0,1)$. Finally, we demonstrate the effectiveness of our model through extensive numerical experiments.
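To make the setting concrete, the following is a schematic formulation of the problem class described above; the notation ($\lambda^{\pi}$ for the occupancy measure, $f_i$ and $g_i$ for the local utilities, $\mu_i$ for the multipliers) is introduced here for illustration and is not the paper's exact statement. With $n$ agents and a joint policy $\pi$,
\[
\max_{\pi}\ \sum_{i=1}^{n} f_i\big(\lambda^{\pi}\big) \quad \text{subject to} \quad g_i\big(\lambda^{\pi}\big) \ge 0, \quad i = 1, \dots, n,
\]
where $\lambda^{\pi}$ is the long-term state-action occupancy measure induced by $\pi$, and $f_i$, $g_i$ are the (possibly nonlinear) local objective and safety utilities of agent $i$. A primal-dual scheme of the kind sketched above would attach a multiplier $\mu_i \ge 0$ to each constraint and alternate approximate ascent on the Lagrangian $\mathcal{L}(\pi, \mu) = \sum_{i} f_i(\lambda^{\pi}) + \sum_{i} \mu_i\, g_i(\lambda^{\pi})$ with projected descent on $\mu$, each agent forming its gradient estimate from information within its $\kappa$-hop neighborhood.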