Scalable Planning and Learning for Multiagent POMDPs

Bayesian methods for reinforcement learning (BRL) allow model uncertainty to be considered explicitly and offer a principled way of dealing with the exploration/exploitation tradeoff. However, for multiagent systems there have been few such approaches, and none of them apply to problems with state uncertainty. In this paper, we fill this gap by proposing a BRL framework for multiagent partially observable Markov decision processes. It considers a team of agents that operates in a centralized fashion, but has uncertainty about both the state and the model of the environment, essentially transforming the learning problem into a planning problem. To deal with the complexity of this planning problem, as well as that of other planning problems with large numbers of actions and observations, we propose a novel scalable approach based on sample-based planning and factored value functions that exploits structure present in many multiagent settings. Experimental results show that we are able to provide high-quality solutions to large problems even with a large amount of initial model uncertainty. We also show that our approach applies in the (traditional) planning setting, demonstrating significantly more efficient planning in factored settings.
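To make the "learning as planning" idea concrete, here is a minimal sketch of the Bayes-adaptive approach the abstract alludes to: model uncertainty is tracked as Dirichlet counts over transitions, and actions are chosen by sample-based planning, here simple Monte Carlo rollouts on models drawn from the posterior. The toy two-state domain, the reward structure, and all function names are invented for illustration; the paper's actual algorithm additionally handles partial observability and factors the value function across agents.

```python
import random
from collections import defaultdict

# Toy illustration (not the paper's algorithm): a 2-state, 2-action MDP
# whose transition model is unknown and tracked via Dirichlet counts.
STATES = [0, 1]
ACTIONS = [0, 1]

def sample_model(counts):
    """Draw one transition model from the Dirichlet posterior given counts."""
    model = {}
    for s in STATES:
        for a in ACTIONS:
            # Sampling from a Dirichlet = normalizing independent Gamma draws.
            draws = [random.gammavariate(counts[(s, a, s2)], 1.0)
                     for s2 in STATES]
            total = sum(draws)
            model[(s, a)] = [d / total for d in draws]
    return model

def rollout_value(model, s, a, depth=10, gamma=0.9):
    """Estimate Q(s, a) by one random rollout; reward 1 for reaching state 1."""
    s2 = random.choices(STATES, weights=model[(s, a)])[0]
    r = 1.0 if s2 == 1 else 0.0
    if depth == 0:
        return r
    return r + gamma * rollout_value(model, s2, random.choice(ACTIONS),
                                     depth - 1, gamma)

def plan(counts, s, n_samples=200):
    """Sample-based planning: average rollout returns over sampled models,
    so exploration value of uncertain actions is reflected automatically."""
    q = defaultdict(float)
    for _ in range(n_samples):
        model = sample_model(counts)
        for a in ACTIONS:
            q[a] += rollout_value(model, s, a) / n_samples
    return max(ACTIONS, key=lambda a: q[a])
```

In the multiagent setting the joint action space is exponential in the number of agents, which is where the factored value functions come in: the joint value is approximated as a sum of local terms over small subsets of agents, so the maximization inside the planner can exploit that structure rather than enumerating all joint actions.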