
Learning in Stackelberg Games with Non-myopic Agents

ACM Conference on Economics and Computation (EC), 2022
Main: 24 pages
9 figures
1 table
Bibliography: 6 pages
Appendix: 27 pages
Abstract

We study Stackelberg games in which a principal repeatedly interacts with a non-myopic, long-lived agent without knowing the agent's payoff function. Although learning in Stackelberg games is well understood when the agent is myopic, non-myopic agents pose additional complications. In particular, a non-myopic agent may strategize and select actions that are inferior in the present in order to mislead the principal's learning algorithm and obtain better outcomes in the future. We provide a general framework that reduces learning in the presence of non-myopic agents to robust bandit optimization in the presence of myopic agents. Through the design and analysis of minimally reactive bandit algorithms, our reduction trades off the statistical efficiency of the principal's learning algorithm against its effectiveness in inducing near-best responses. We apply this framework to Stackelberg security games (SSGs), pricing with an unknown demand curve, general finite Stackelberg games, and strategic classification. In each setting, we characterize the type and impact of misspecifications present in near-best responses and develop a learning algorithm robust to such misspecifications. Along the way, we improve the state-of-the-art query complexity of learning in SSGs with n targets from O(n^3) to a near-optimal \widetilde{O}(n) by uncovering a fundamental structural property of these games. The latter result is of independent interest beyond learning with non-myopic agents.
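To make the notion of a "minimally reactive" algorithm concrete, here is an illustrative sketch (not the paper's actual algorithm) of a phased explore-then-commit bandit in which the principal's action changes only a handful of times. All names and parameters (`reward_fn`, `block_len`, `n_blocks`) are hypothetical; the point is that long commitment blocks dilute a non-myopic agent's incentive to distort any single round of feedback.

```python
import random

def minimally_reactive_bandit(reward_fn, n_arms, block_len, n_blocks, seed=0):
    """Illustrative phased explore-then-commit scheme (a sketch, not the
    paper's algorithm): each arm is played for one long block, then the
    empirically best arm is committed to for the remaining blocks.
    Because the principal switches actions only n_arms times, the policy
    is 'minimally reactive': a strategic agent gains little by
    manipulating feedback in any individual round."""
    rng = random.Random(seed)
    means = [0.0] * n_arms
    # Exploration phase: one long block per arm, averaging noisy rewards.
    for arm in range(n_arms):
        total = 0.0
        for _ in range(block_len):
            total += reward_fn(arm, rng)
        means[arm] = total / block_len
    # Commitment phase: play the empirical best arm for the rest of the horizon.
    best = max(range(n_arms), key=lambda a: means[a])
    committed_reward = sum(reward_fn(best, rng)
                           for _ in range((n_blocks - n_arms) * block_len))
    return best, means, committed_reward
```

Longer blocks improve the accuracy of each estimated response (statistical robustness) at the cost of slower adaptation, which is the efficiency/effectiveness trade-off the reduction navigates.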
