
A Communication-Efficient Decentralized Actor-Critic Algorithm

Main: 12 Pages
7 Figures
Bibliography: 3 Pages
1 Table
Abstract

In this paper, we study the problem of reinforcement learning in multi-agent systems where communication among agents is limited. We develop a decentralized actor-critic learning framework in which each agent performs several local updates of its policy and value function, where the latter is approximated by a multi-layer neural network, before exchanging information with its neighbors. This local training strategy substantially reduces the communication burden while maintaining coordination across the network. We establish a finite-time convergence analysis for the algorithm under Markovian sampling. Specifically, to attain an $\varepsilon$-accurate stationary point, the sample complexity is of order $\mathcal{O}(\varepsilon^{-3})$ and the communication complexity is of order $\mathcal{O}(\varepsilon^{-1}\tau^{-1})$, where $\tau$ denotes the number of local training steps. We also show how the final error bound depends on the neural network's approximation quality. Numerical experiments in a cooperative control setting illustrate and validate the theoretical findings.
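To make the local-update-then-communicate pattern described above concrete, here is a minimal, hypothetical Python sketch: each agent runs tau local actor-critic steps on its own samples and then averages its actor and critic parameters with its neighbors over a ring topology. The linear Gaussian policy, one-hidden-layer critic, toy dynamics, step sizes, and mixing weights are all illustrative assumptions, not the paper's algorithm or settings.

```python
# Hypothetical sketch: tau local actor-critic updates per agent, then one
# consensus (gossip) averaging step with neighbors. Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
n_agents, obs_dim, tau, rounds = 4, 3, 5, 20

# Ring topology with doubly stochastic mixing weights (assumed, for illustration).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

# Per-agent parameters: linear Gaussian policy mean and a one-hidden-layer value net.
theta = rng.normal(size=(n_agents, obs_dim))          # actor weights
V1 = 0.1 * rng.normal(size=(n_agents, 8, obs_dim))    # critic hidden layer
V2 = 0.1 * rng.normal(size=(n_agents, 8))             # critic output layer

def value(i, s):
    """Critic value and hidden activations for agent i at state s."""
    h = np.tanh(V1[i] @ s)
    return V2[i] @ h, h

alpha, beta, gamma = 1e-2, 5e-2, 0.95                 # step sizes and discount (assumed)
s = rng.normal(size=(n_agents, obs_dim))              # each agent's local state

for _ in range(rounds):
    # --- local training: tau actor-critic updates per agent, no communication ---
    for i in range(n_agents):
        for _ in range(tau):
            a = theta[i] @ s[i] + rng.normal()                      # Gaussian policy action
            s_next = 0.9 * s[i] + 0.1 * rng.normal(size=obs_dim)    # toy dynamics
            reward = -np.sum(s[i] ** 2) - 0.01 * a ** 2             # toy cooperative cost
            v, h = value(i, s[i])
            v_next, _ = value(i, s_next)
            delta = reward + gamma * v_next - v                     # TD error
            # critic: TD(0) semi-gradient step through the small value network
            grad_V1 = np.outer(V2[i] * (1.0 - h ** 2), s[i])
            V2[i] += beta * delta * h
            V1[i] += beta * delta * grad_V1
            # actor: policy-gradient step using the TD error as the advantage
            theta[i] += alpha * delta * (a - theta[i] @ s[i]) * s[i]
            s[i] = s_next
    # --- communication: average parameters with neighbors via the mixing matrix ---
    theta = W @ theta
    V1 = np.einsum('ij,jkl->ikl', W, V1)
    V2 = W @ V2

print("final actor parameters per agent:\n", theta)
```

Under this pattern, the network communicates only once every tau local steps, which is the mechanism behind the $\mathcal{O}(\varepsilon^{-1}\tau^{-1})$ communication complexity claimed in the abstract.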
