Learning Stationary Nash Equilibrium Policies in n-Player Stochastic Games with Independent Chains

Abstract

We consider a subclass of n-player stochastic games in which players have their own internal state/action spaces while being coupled through their payoff functions. It is assumed that players' internal chains are driven by independent transition probabilities. Moreover, players can observe only realizations of their payoffs, not the actual payoff functions, and cannot observe each other's states/actions. For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable. However, for general reward functions, we develop polynomial-time learning algorithms based on dual averaging and dual mirror descent, which converge in terms of the averaged Nikaido-Isoda distance to the set of ϵ-NE policies almost surely or in expectation. In particular, under extra assumptions on the reward functions such as social concavity, we derive polynomial upper bounds on the number of iterations required to achieve an ϵ-NE policy with high probability. Finally, we evaluate the effectiveness of the proposed algorithms in learning ϵ-NE policies using numerical experiments for energy management in smart grids.
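For context, the convergence metric mentioned above builds on the standard Nikaido-Isoda (NI) function; the paper's averaged variant adds averaging over iterates, the exact form of which is not reproduced here. Writing u_i for player i's payoff and π for a joint stationary policy, the standard definitions are:

```latex
\Psi(\pi, \pi') = \sum_{i=1}^{n} \Big[ u_i(\pi'_i, \pi_{-i}) - u_i(\pi_i, \pi_{-i}) \Big],
\qquad
\mathrm{NI}(\pi) = \max_{\pi'} \Psi(\pi, \pi'),
```

so that π is an ϵ-NE precisely when NI(π) ≤ ϵ. To illustrate the flavor of the dual-averaging updates the abstract refers to, below is a minimal sketch of entropic dual averaging on the probability simplex for a single player who, as in the paper's setting, observes only payoff realizations. All names (dual_averaging_simplex, sample_payoff, eta), the importance-weighted payoff estimator, and the step-size choice are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def dual_averaging_simplex(sample_payoff, n_actions, T, eta=0.1):
    """Entropic dual averaging (sketch): accumulate payoff estimates in a
    dual vector and map to the simplex via a softmax mirror map."""
    y = np.zeros(n_actions)           # dual vector of cumulative payoff estimates
    avg_policy = np.zeros(n_actions)  # running average of policy iterates
    for t in range(1, T + 1):
        # Softmax mirror map: policy proportional to exp(eta * y)
        logits = eta * y
        policy = np.exp(logits - logits.max())
        policy /= policy.sum()

        # Play an action; observe only a payoff realization, not the function
        a = np.random.choice(n_actions, p=policy)
        r = sample_payoff(a)

        # Importance-weighted estimate of the full payoff vector (bandit feedback)
        g = np.zeros(n_actions)
        g[a] = r / max(policy[a], 1e-12)
        y += g

        avg_policy += (policy - avg_policy) / t  # update averaged iterate
    return avg_policy

# Hypothetical usage: two actions with noisy payoffs around 0.3 and 0.7
rng = np.random.default_rng(0)
pi = dual_averaging_simplex(lambda a: rng.normal([0.3, 0.7][a], 0.1), 2, 5000)
```

The averaged iterate is returned because, for this family of no-regret methods, convergence guarantees of the kind stated in the abstract are typically established for time-averaged policies rather than the last iterate.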
