Learning in time-varying games

Abstract

In this paper, we examine the long-run behavior of regret-minimizing agents in time-varying games with continuous action spaces. In its most basic form, (external) regret minimization guarantees that an agent's cumulative payoff is asymptotically no worse than that of the agent's best fixed action in hindsight. Going beyond this worst-case guarantee, we consider games that evolve over time and examine the asymptotic behavior of a wide class of no-regret policies based on mirror descent. In this general context, we show that the induced sequence of play (a) converges to Nash equilibrium in time-varying games that stabilize in the long run to a strictly monotone limit; and (b) stays asymptotically close to the evolving equilibrium of the sequence of stage games (assuming they are strongly monotone). Our results apply to both gradient-based and payoff-based feedback, i.e., the "bandit" case where players only get to observe the payoffs of their chosen actions.
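As a rough illustration of the kind of dynamics the abstract describes (not the authors' exact algorithm), the sketch below runs Euclidean mirror descent, i.e., projected gradient play, in a hypothetical two-player time-varying game whose strongly monotone stage games stabilize to a limit; all payoff functions, parameters, and names here are illustrative assumptions.

```python
import numpy as np

# Illustrative time-varying two-player game (payoffs are hypothetical):
# player i maximizes  u_i(x; t) = -(x_i - theta_i(t))^2 - c * x_1 * x_2
# over the interval [0, 1].  For |c| < 2 each stage game is strongly monotone,
# so it has a unique Nash equilibrium that drifts with theta(t).

c = 0.5  # coupling strength; |c| < 2 keeps the stage games strongly monotone

def theta(t):
    """Slowly drifting payoff parameters; they stabilize as t grows."""
    return np.array([0.6 + 0.3 / (1 + t), 0.4 - 0.2 / (1 + t)])

def payoff_gradients(x, t):
    """Individual payoff gradients v_i(x; t) = d u_i / d x_i."""
    th = theta(t)
    return np.array([
        -2.0 * (x[0] - th[0]) - c * x[1],
        -2.0 * (x[1] - th[1]) - c * x[0],
    ])

def stage_equilibrium(t):
    """Closed-form Nash equilibrium of the stage game at time t (for reference)."""
    th = theta(t)
    A = np.array([[2.0, c], [c, 2.0]])
    return np.clip(np.linalg.solve(A, 2.0 * th), 0.0, 1.0)

# Euclidean mirror descent = projected gradient play:
#   x_{t+1} = Proj_{[0,1]^2}( x_t + gamma_t * v(x_t; t) )
x = np.array([0.1, 0.9])  # arbitrary initial joint action
for t in range(1, 5001):
    gamma = 1.0 / np.sqrt(t)  # vanishing step size
    x = np.clip(x + gamma * payoff_gradients(x, t), 0.0, 1.0)

print("final play       :", x)
print("limit equilibrium:", stage_equilibrium(10**9))
```

In this toy setting the iterates track the drifting equilibrium and end up close to the equilibrium of the limit game; the paper's results concern this kind of behavior for general mirror descent policies, including the bandit case where only realized payoffs are observed.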
