Last-Iterate Convergence Properties of Regret-Matching Algorithms in Games

1 November 2023
Yang Cai
Gabriele Farina
Julien Grand-Clément
Christian Kroer
Chung-Wei Lee
Haipeng Luo
Weiqiang Zheng
Abstract

We study last-iterate convergence properties of algorithms for solving two-player zero-sum games based on Regret Matching+ (RM+). Despite their widespread use for solving real games, virtually nothing is known about their last-iterate convergence. A major obstacle to analyzing RM-type dynamics is that their regret operators lack Lipschitzness and (pseudo)monotonicity. We start by showing numerically that several variants used in practice, such as RM+, predictive RM+ and alternating RM+, all lack last-iterate convergence guarantees even on a simple 3×3 matrix game. We then prove that recent variants of these algorithms based on a smoothing technique, extragradient RM+ and smooth predictive RM+, enjoy asymptotic last-iterate convergence (without a rate), 1/√t best-iterate convergence, and, when combined with restarting, linear-rate last-iterate convergence. Our analysis builds on a new characterization of the geometric structure of the limit points of our algorithms, marking a significant departure from most of the literature on last-iterate convergence. We believe that our analysis may be of independent interest and offers a fresh perspective for studying last-iterate convergence in algorithms based on non-monotone operators.
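As a rough illustration of the dynamics discussed above, the sketch below runs plain RM+ in self-play on a small zero-sum matrix game and compares the duality gap of the last iterate with that of the average iterate. This is not the authors' code; the payoff matrix, helper names, and iteration count are illustrative assumptions, and the 3×3 game used here (rock-paper-scissors) is not claimed to be the counterexample analyzed in the paper.

# Minimal sketch (assumptions as noted above): plain RM+ self-play on a
# zero-sum matrix game, comparing last-iterate vs average-iterate duality gap.
import numpy as np

def normalize(r):
    """Map nonnegative regrets to a strategy (uniform if all regrets are zero)."""
    s = r.sum()
    return r / s if s > 0 else np.full_like(r, 1.0 / len(r))

def duality_gap(A, x, y):
    """Exploitability of (x, y) when the row player minimizes x^T A y."""
    return (A.T @ x).max() - (A @ y).min()

# Illustrative 3x3 payoff matrix (rock-paper-scissors), not the paper's example.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

n, m = A.shape
Rx, Ry = np.zeros(n), np.zeros(m)        # RM+ cumulative regrets (kept nonnegative)
x_avg, y_avg = np.zeros(n), np.zeros(m)
T = 10_000

for t in range(1, T + 1):
    x, y = normalize(Rx), normalize(Ry)
    loss_x = A @ y        # per-action losses for the (minimizing) row player
    gain_y = A.T @ x      # per-action gains for the (maximizing) column player
    # Instantaneous regrets: how much better each pure action would have done,
    # followed by the RM+ truncation at zero.
    Rx = np.maximum(Rx + (x @ loss_x - loss_x), 0.0)
    Ry = np.maximum(Ry + (gain_y - y @ gain_y), 0.0)
    x_avg += x
    y_avg += y

x_avg /= T
y_avg /= T
print("last-iterate gap:   ", duality_gap(A, x, y))
print("average-iterate gap:", duality_gap(A, x_avg, y_avg))

In runs of this sketch one would typically observe the average iterate's gap shrinking while the last iterate keeps cycling, which is the kind of non-convergent last-iterate behavior the abstract reports numerically for RM+ variants.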

View on arXiv
@article{cai2023_2311.00676,
  title={Last-Iterate Convergence Properties of Regret-Matching Algorithms in Games},
  author={Yang Cai and Gabriele Farina and Julien Grand-Clément and Christian Kroer and Chung-Wei Lee and Haipeng Luo and Weiqiang Zheng},
  journal={arXiv preprint arXiv:2311.00676},
  year={2023}
}