Learning a Game by Paying the Agents

3 March 2025
Brian Hu Zhang
Tao Lin
Yiling Chen
Tuomas Sandholm
Abstract

We study the problem of learning the utility functions of agents in a normal-form game by observing the agents play the game repeatedly. Differing from most prior literature, we introduce a principal with the power to observe the agents playing the game, send the agents signals, and send the agents payments as a function of their actions. Under reasonable behavioral models for the agents such as iterated dominated action removal or a no-regret assumption, we show that the principal can, using a number of rounds polynomial in the size of the game, learn the utility functions of all agents to any desirable precision ε > 0. We also show lower bounds in both models, which nearly match the upper bounds in the former model and also strictly separate the two models: the principal can learn strictly faster in the iterated dominance model. Finally, we discuss implications for the problem of steering agents to a desired equilibrium: in particular, we introduce, using our utility-learning algorithm as a subroutine, the first algorithm for steering learning agents without prior knowledge of their utilities.
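To make the payment mechanism concrete, below is a minimal, illustrative Python sketch of the underlying idea, not the paper's algorithm: if an agent best-responds to bonus payments, a binary search over the bonus offered on one action reveals the agent's utility gap between two actions up to a chosen precision. The names (estimate_utility_gap, toy_best_response), the best-response oracle interface, and the assumption that utilities are normalized to [0, 1] are ours, introduced only for illustration.

def estimate_utility_gap(best_response, a, b, eps=0.01, payment_cap=1.0):
    """Illustrative sketch (not the paper's algorithm): binary-search the
    bonus payment on action `b` that makes a perfectly rational agent switch
    away from its preferred action `a`, revealing u(a) - u(b) up to `eps`.

    `best_response(payments)` is a hypothetical oracle: given a dict of
    per-action bonus payments, it returns the action the agent plays.
    Utilities are assumed to lie in [0, 1], so `payment_cap=1.0` suffices.
    """
    assert best_response({}) == a, "agent must currently prefer action a"
    lo, hi = 0.0, payment_cap
    while hi - lo > eps:
        mid = (lo + hi) / 2
        chosen = best_response({b: mid})
        if chosen == b:
            hi = mid   # a bonus of `mid` already flips the agent: gap <= mid
        else:
            lo = mid   # a bonus of `mid` is not enough: gap >= mid
    return (lo + hi) / 2  # approximate utility gap u(a) - u(b)


# Toy usage: a single agent with fixed, hidden utilities over two actions.
hidden_u = {"a": 0.7, "b": 0.4}

def toy_best_response(payments):
    return max(hidden_u, key=lambda act: hidden_u[act] + payments.get(act, 0.0))

gap = estimate_utility_gap(toy_best_response, "a", "b", eps=0.001)
print(f"estimated u(a) - u(b) ~ {gap:.3f}")  # ~ 0.300

In the paper's setting the principal must additionally handle many agents interacting at once and weaker behavioral assumptions such as no-regret play, which is where the polynomial round complexity and the separation between the two behavioral models arise.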

View on arXiv
@article{zhang2025_2503.01976,
  title={Learning a Game by Paying the Agents},
  author={Brian Hu Zhang and Tao Lin and Yiling Chen and Tuomas Sandholm},
  journal={arXiv preprint arXiv:2503.01976},
  year={2025}
}