Learning Equilibria in Matching Games with Bandit Feedback

Main: 9 pages, 3 figures; Bibliography: 3 pages; Appendix: 9 pages
Abstract

We investigate the problem of learning an equilibrium in a generalized two-sided matching market, where agents can adaptively choose their actions based on their assigned matches. Specifically, we consider a setting in which matched agents engage in a zero-sum game with initially unknown payoff matrices, and we explore whether a centralized procedure can learn an equilibrium from bandit feedback. We adopt the solution concept of matching equilibrium, where a pair consisting of a matching $\mathfrak{m}$ and a set of agent strategies $X$ forms an equilibrium if no agent has an incentive to deviate from $(\mathfrak{m}, X)$. To measure the deviation of a given pair $(\mathfrak{m}, X)$ from the equilibrium pair $(\mathfrak{m}^\star, X^\star)$, we introduce matching instability, which can serve as a regret measure for the corresponding learning problem. We then propose a UCB algorithm in which agents form preferences and select actions based on optimistic estimates of the game payoffs, and prove that it achieves sublinear, instance-independent regret over a time horizon $T$.
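To make the optimistic-estimation idea concrete, here is a minimal sketch of UCB-style learning in a single zero-sum game with bandit feedback. Everything in it is an assumption for illustration: the 2x2 payoff matrix `A_true`, the Bernoulli rewards, the confidence-bonus constants, and the restriction to pure strategies are all simplifications, not the algorithm or analysis from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 zero-sum game: A_true[i, j] is the row player's expected
# Bernoulli payoff when the row player plays i and the column player plays j.
A_true = np.array([[0.8, 0.2],
                   [0.4, 0.6]])
n_rows, n_cols = A_true.shape

counts = np.zeros((n_rows, n_cols))  # how often each action pair was played
sums = np.zeros((n_rows, n_cols))    # cumulative observed payoffs

def optimistic_payoffs(t, delta=0.1):
    """UCB-style optimistic estimate of the unknown payoff matrix."""
    means = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    bonus = np.sqrt(np.log(max(t, 2) / delta) / np.maximum(counts, 1))
    bonus[counts == 0] = 2.0         # force exploration of untried entries
    return means + bonus

T = 2000
for t in range(1, T + 1):
    A_ucb = optimistic_payoffs(t)
    # Row player: max-min pure action of the optimistic game; column player
    # best-responds. Pure strategies here stand in for the mixed-strategy
    # equilibria the paper actually considers.
    i = int(np.argmax(A_ucb.min(axis=1)))
    j = int(np.argmin(A_ucb[i]))
    counts[i, j] += 1
    sums[i, j] += rng.binomial(1, A_true[i, j])  # bandit feedback

means = sums / np.maximum(counts, 1)
```

In the paper's setting this optimistic estimation would additionally drive the agents' preferences over matches, so the matching itself is recomputed from the same upper confidence bounds; the sketch above isolates only the within-match learning step.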

@article{athanasopoulos2025_2506.03802,
  title={Learning Equilibria in Matching Games with Bandit Feedback},
  author={Andreas Athanasopoulos and Christos Dimitrakakis},
  journal={arXiv preprint arXiv:2506.03802},
  year={2025}
}