Learning Equilibria in Matching Games with Bandit Feedback

We investigate the problem of learning an equilibrium in a generalized two-sided matching market, where agents can adaptively choose their actions based on their assigned matches. Specifically, we consider a setting in which matched agents engage in a zero-sum game with initially unknown payoff matrices, and we explore whether a centralized procedure can learn an equilibrium from bandit feedback. We adopt the solution concept of matching equilibrium, where a pair consisting of a matching and a set of agent strategies forms an equilibrium if no agent has an incentive to deviate from it. To measure how far a given matching–strategy pair is from equilibrium, we introduce a notion of matching instability that can serve as a regret measure for the corresponding learning problem. We then propose a UCB algorithm in which agents form preferences and select actions based on optimistic estimates of the game payoffs, and prove that it achieves sublinear, instance-independent regret over the time horizon.
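As a toy illustration of the optimism principle described above (a sketch, not the paper's algorithm), consider a single matched pair playing an unknown zero-sum matrix game under bandit feedback. Both players maintain confidence bounds on each payoff entry from noisy observations: the row player plays the maximin action of the upper confidence bounds, while the column player plays the minimax action of the lower confidence bounds. The payoff matrix, noise level, horizon, and bonus constant below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 zero-sum game: A[i, j] is the (unknown) expected payoff
# to the row player. This matrix has a pure saddle point at (1, 1) with
# game value 0.4, so optimistic play should concentrate there.
A = np.array([[0.8, 0.0],
              [1.0, 0.4]])
noise_std = 0.1
T = 20000

counts = np.zeros_like(A)  # number of times each entry was played
sums = np.zeros_like(A)    # running sum of observed payoffs per entry

for t in range(1, T + 1):
    means = np.divide(sums, counts, out=np.zeros_like(A), where=counts > 0)
    bonus = np.sqrt(2.0 * np.log(t + 1) / np.maximum(counts, 1))
    # Unplayed entries get infinite optimism so every entry is tried eventually.
    ucb = np.where(counts > 0, means + bonus, np.inf)   # optimistic for the row player
    lcb = np.where(counts > 0, means - bonus, -np.inf)  # optimistic for the column player
    i = int(np.argmax(ucb.min(axis=1)))  # row: maximin of the upper bounds
    j = int(np.argmin(lcb.max(axis=0)))  # column: minimax of the lower bounds
    # Bandit feedback: only a noisy payoff of the chosen entry is observed.
    r = A[i, j] + rng.normal(0.0, noise_std)
    counts[i, j] += 1
    sums[i, j] += r

means = np.divide(sums, counts, out=np.zeros_like(A), where=counts > 0)
```

As the confidence intervals shrink, the optimistic maximin/minimax choices agree with the true saddle point, so play concentrates on entry (1, 1) and `means[1, 1]` approaches the game value 0.4. The full setting in the paper additionally learns the matching itself, which this single-pair sketch omits.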
@article{athanasopoulos2025_2506.03802,
  title   = {Learning Equilibria in Matching Games with Bandit Feedback},
  author  = {Andreas Athanasopoulos and Christos Dimitrakakis},
  journal = {arXiv preprint arXiv:2506.03802},
  year    = {2025}
}