Satisficing Paths and Independent Multi-Agent Reinforcement Learning in Stochastic Games

9 October 2021
Bora Yongacoglu
Gürdal Arslan
S. Yüksel
Abstract

In multi-agent reinforcement learning (MARL), independent learners are those that do not observe the actions of other agents in the system. Due to the decentralization of information, it is challenging to design independent learners that drive play to equilibrium. This paper investigates the feasibility of using satisficing dynamics to guide independent learners to approximate equilibrium in stochastic games. For $\epsilon \geq 0$, an $\epsilon$-satisficing policy update rule is any rule that instructs the agent to not change its policy when it is $\epsilon$-best-responding to the policies of the remaining players; $\epsilon$-satisficing paths are defined to be sequences of joint policies obtained when each agent uses some $\epsilon$-satisficing policy update rule to select its next policy. We establish structural results on the existence of $\epsilon$-satisficing paths into $\epsilon$-equilibrium in both symmetric $N$-player games and general stochastic games with two players. We then present an independent learning algorithm for $N$-player symmetric games and give high probability guarantees of convergence to $\epsilon$-equilibrium under self-play. This guarantee is made using symmetry alone, leveraging the previously unexploited structure of $\epsilon$-satisficing paths.
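The abstract's central object, an $\epsilon$-satisficing policy update rule, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical toy example in a repeated 2x2 coordination game, not the paper's algorithm for stochastic games: each independent player keeps its current action whenever it is an $\epsilon$-best response to the opponent's current action, and otherwise resamples uniformly at random. The game, payoffs, and random-resampling rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: a symmetric 2x2 coordination game, not the
# stochastic-game setup of the paper. payoff[i, j] is a player's payoff when
# it takes action i and its opponent takes action j.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
epsilon = 0.05
n_actions = 2

# In this toy example, each player's "policy" is a single pure action.
actions = [int(rng.integers(n_actions)), int(rng.integers(n_actions))]

def satisficing_update(my_action, opponent_action):
    """One epsilon-satisficing rule (of many): keep the current action if it
    is an epsilon-best response to the opponent's action; otherwise explore
    by resampling uniformly at random."""
    current_value = payoff[my_action, opponent_action]
    best_value = payoff[:, opponent_action].max()
    if current_value >= best_value - epsilon:   # epsilon-satisfied: inertia
        return my_action
    return int(rng.integers(n_actions))         # unsatisfied: random search

for _ in range(100):
    # Independent, simultaneous updates: each player uses only its own payoff
    # and the opponent's realized action, never the opponent's update rule.
    actions = [satisficing_update(actions[0], actions[1]),
               satisficing_update(actions[1], actions[0])]

print("joint action after 100 rounds:", actions)
```

Once the joint action reaches a coordination outcome, both players are $\epsilon$-best-responding and the rule keeps them there; the sequence of joint policies generated this way is an $\epsilon$-satisficing path in the sense defined in the abstract.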
