ResearchTrend.AI

arXiv:2106.14668
Evolutionary Dynamics and Φ-Regret Minimization in Games

28 June 2021
Georgios Piliouras
Mark Rowland
Shayegan Omidshafiei
Romuald Elie
Daniel Hennes
Jerome T. Connor
K. Tuyls
Abstract

Regret has been established as a foundational concept in online learning, and likewise has important applications in the analysis of learning dynamics in games. Regret quantifies the difference between a learner's performance and that of a baseline chosen in hindsight. It is well known that regret-minimizing algorithms converge to certain classes of equilibria in games; however, traditional forms of regret used in game theory predominantly consider baselines that permit deviations to deterministic actions or strategies. In this paper, we revisit our understanding of regret from the perspective of deviations over partitions of the full mixed strategy space (i.e., probability distributions over pure strategies), under the lens of the previously established Φ-regret framework, which provides a continuum of stronger regret measures. Importantly, Φ-regret enables learning agents to consider deviations from and to mixed strategies, generalizing several existing notions of regret such as external, internal, and swap regret, and thus broadening the insights gained from regret-based analysis of learning algorithms. We prove here that the well-studied evolutionary learning algorithm of replicator dynamics (RD) seamlessly minimizes the strongest possible form of Φ-regret in generic 2×2 games, without any modification of the underlying algorithm itself. We subsequently conduct experiments validating our theoretical results in a suite of 144 2×2 games wherein RD exhibits a diverse set of behaviors. We conclude by providing empirical evidence of Φ-regret minimization by RD in some larger games, hinting at further opportunity for Φ-regret-based study of such algorithms from both a theoretical and empirical perspective.
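To make the objects in the abstract concrete, here is a minimal, illustrative sketch (not code from the paper) of replicator dynamics in a symmetric 2×2 game, together with cumulative external regret — the weakest member of the Φ-regret continuum, measuring performance against the best fixed pure strategy in hindsight. The payoff matrix, step size, and horizon are arbitrary choices for demonstration.

```python
import numpy as np

# Hypothetical example payoffs (Prisoner's Dilemma-style); row 0 = cooperate,
# row 1 = defect. Not taken from the paper's experimental suite.
A = np.array([[3.0, 0.0],
              [5.0, 1.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of replicator dynamics: dx_i = x_i * ((Ax)_i - x.Ax)."""
    fitness = A @ x          # payoff of each pure strategy against mixture x
    avg = x @ fitness        # population-average payoff
    return x + dt * x * (fitness - avg)

def external_regret(history, A):
    """Best fixed pure strategy's cumulative payoff minus payoff obtained."""
    earned = sum(x @ A @ x for x in history)
    best_fixed = max(sum((A @ x)[i] for x in history) for i in range(len(A)))
    return best_fixed - earned

x = np.array([0.5, 0.5])
history = []
for _ in range(1000):
    history.append(x.copy())
    x = replicator_step(x, A)

print(x, external_regret(history, A))
```

Because defection strictly dominates here, the population mass concentrates on the second strategy and the per-step regret shrinks, so the cumulative external regret stays bounded. Φ-regret generalizes this by comparing against transformations of the played mixed strategy rather than a single fixed pure strategy.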
