
arXiv:2002.08456
From Poincaré Recurrence to Convergence in Imperfect Information Games: Finding Equilibrium via Regularization

19 February 2020
Julien Perolat, Rémi Munos, Jean-Baptiste Lespiau, Shayegan Omidshafiei, Mark Rowland, Pedro A. Ortega, Neil Burch, Thomas W. Anthony, David Balduzzi, Bart De Vylder, Georgios Piliouras, Marc Lanctot, Karl Tuyls
Abstract

In this paper we investigate Follow the Regularized Leader (FTRL) dynamics in sequential imperfect information games (IIGs). We generalize existing Poincaré recurrence results from normal-form games to zero-sum two-player IIGs and other sequential game settings. We then investigate how adapting the reward of the game, by adding a regularization term, yields strong convergence guarantees in monotone games. We continue by showing how this reward-adaptation technique can be leveraged to build algorithms that converge exactly to the Nash equilibrium. Finally, we show how these insights can be used directly to build state-of-the-art model-free algorithms for zero-sum two-player IIGs.
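The contrast the abstract draws — recurrence under plain FTRL versus convergence once the reward is regularized — can be illustrated on the simplest zero-sum game. The sketch below runs entropy-regularized FTRL (i.e. multiplicative-weights updates) on matching pennies: with no reward adaptation the strategies cycle around the Nash equilibrium (0.5, 0.5), while adding an entropic penalty of strength `tau` to each player's reward makes the dynamics contract to it. This is a minimal pure-Python illustration under assumed step size `eta` and regularization weight `tau`; it is not the paper's algorithm or implementation.

```python
import math

# Matching pennies: row player's payoff matrix (the game is zero-sum).
A = [[1.0, -1.0], [-1.0, 1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def ftrl_step(x, grad, eta):
    # Entropy-regularized FTRL update = multiplicative weights:
    # x_i <- x_i * exp(eta * grad_i), then renormalize (computed stably in log space).
    logits = [math.log(xi) + eta * gi for xi, gi in zip(x, grad)]
    m = max(logits)
    w = [math.exp(l - m) for l in logits]
    s = sum(w)
    return [wi / s for wi in w]

def run(steps, eta, tau):
    # tau = 0: plain FTRL. tau > 0: each player's reward is adapted by an
    # entropic regularization term -tau * sum_i p_i log p_i, whose gradient
    # contributes -tau * (log p_i + 1) to the payoff gradient.
    x = [0.9, 0.1]  # row player's mixed strategy
    y = [0.2, 0.8]  # column player's mixed strategy
    dists = []
    for _ in range(steps):
        gx = [g - tau * (math.log(xi) + 1.0)
              for g, xi in zip(matvec(A, y), x)]
        aTx = [sum(A[i][j] * x[i] for i in range(2)) for j in range(2)]
        gy = [-g - tau * (math.log(yi) + 1.0) for g, yi in zip(aTx, y)]
        x, y = ftrl_step(x, gx, eta), ftrl_step(y, gy, eta)
        # L1 distance of both players to the Nash equilibrium (0.5, 0.5).
        dists.append(abs(x[0] - 0.5) + abs(y[0] - 0.5))
    return dists

plain = run(5000, eta=0.05, tau=0.0)
reg = run(5000, eta=0.05, tau=0.2)
print(f"plain FTRL,       final distance to equilibrium: {plain[-1]:.4f}")
print(f"regularized FTRL, final distance to equilibrium: {reg[-1]:.4f}")
```

With `tau = 0` the distance to equilibrium never shrinks (discrete-time FTRL in fact spirals outward), whereas with `tau > 0` it decays essentially to zero — though note the regularized dynamics converge to the equilibrium of the *regularized* game, which here happens to coincide with the Nash equilibrium by symmetry; in general, driving `tau` toward zero (or re-centering the regularizer, as the paper's exact-convergence construction does) is needed to recover the unregularized equilibrium.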
