
Prosocial learning agents solve generalized Stag Hunts better than selfish ones

Abstract

There is much interest in applying reinforcement learning methods to multi-agent systems. A popular way to do so is reactive training -- i.e., treating other agents as if they were a stationary part of the learner's environment. Dyads of such learners, if they converge, converge to Nash equilibria of the game. However, this raises an important game-theoretic issue: positive-sum games can have multiple equilibria that differ in their payoffs. We show that even in simple coordination games, reactive reinforcement learning agents often coordinate on equilibria with suboptimal payoffs for both agents. We also show that receiving utility from the rewards other agents receive -- i.e., having prosocial preferences -- leads agents to converge to better equilibria in a class of generalized Stag Hunt games. We show this analytically for matrix games and experimentally for more complex Markov versions. Importantly, this holds even if only one of the agents has social preferences. This implies that even if an agent designer controls only a single agent of a dyad and cares only about that agent's payoff, it can still be better for the designer to make the agent prosocial rather than selfish.
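To make the abstract's setup concrete, here is a minimal sketch of reactive (independent) learning on a Stag Hunt, where a prosocial agent's training reward is its own payoff plus a coefficient times its partner's payoff. The payoff matrix, stateless epsilon-greedy Q-learning, and the prosociality coefficient `alpha_prosocial` are illustrative assumptions for this sketch, not the paper's exact formulation or values.

```python
import numpy as np

# Action 0 = Stag, action 1 = Hare. Entry [i, j] = (row payoff, column payoff).
# Illustrative payoffs: Stag-Stag is payoff-dominant, Hare is the safe choice.
PAYOFF = np.array([[(4, 4), (0, 3)],
                   [(3, 0), (3, 3)]], dtype=float)

def run(alpha_prosocial=(0.0, 0.0), episodes=5000, lr=0.1, eps=0.1, seed=0):
    """Two independent, stateless epsilon-greedy Q-learners.
    Agent i trains on: own payoff + alpha_prosocial[i] * partner's payoff."""
    rng = np.random.default_rng(seed)
    q = np.zeros((2, 2))  # q[i, a]: agent i's value estimate for action a
    for _ in range(episodes):
        acts = [a if rng.random() > eps else rng.integers(2)
                for a in q.argmax(axis=1)]
        r = PAYOFF[acts[0], acts[1]]  # (row payoff, column payoff)
        for i in range(2):
            shaped = r[i] + alpha_prosocial[i] * r[1 - i]
            q[i, acts[i]] += lr * (shaped - q[i, acts[i]])
    return q.argmax(axis=1)  # greedy joint action after training (0 = Stag)

if __name__ == "__main__":
    print("both selfish:   ", run(alpha_prosocial=(0.0, 0.0)))
    print("one prosocial:  ", run(alpha_prosocial=(1.0, 0.0)))
```

Comparing the two runs over several seeds illustrates the abstract's claim that a single prosocial agent can shift the dyad toward the payoff-dominant Stag equilibrium; the specific outcomes depend on the assumed payoffs and hyperparameters.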
