Efficient Learning in Polyhedral Games via Best Response Oracles

We study online learning and equilibrium computation in games with polyhedral decision sets, a property shared by both normal-form games and extensive-form games (EFGs), when the learning agent is restricted to using a best-response oracle. We show how to achieve constant regret in zero-sum games and $O(T^{1/4})$ regret in general-sum games while using only $O(\log t)$ best-response queries at a given iteration $t$, thus improving over the best prior result, which required $O(T)$ queries per iteration. Moreover, our framework yields the first last-iterate convergence guarantees for self-play with best-response oracles in zero-sum games. This convergence occurs at a linear rate, though with a condition-number dependence. We go on to show an $O(1/\sqrt{T})$ best-iterate convergence rate without such a dependence. Our results build on linear-rate convergence results for variants of the Frank-Wolfe (FW) algorithm for strongly convex and smooth minimization problems over polyhedral domains. These FW results depend on a condition number of the polytope known as the facial distance. In order to enable application to settings such as EFGs, we prove two broad new results: 1) the facial distance for polytopes in standard form is at least $\gamma/\sqrt{k}$, where $\gamma$ is the minimum value of a nonzero coordinate of a vertex of the polytope and $k$ is the number of tight inequality constraints in the optimal face, and 2) the facial distance for polytopes cut out by a nonzero integral constraint matrix and an integral right-hand side admits an analogous explicit lower bound. This yields the first such results for several problems, including sequence-form polytopes, flow polytopes, and matching polytopes.
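To make the oracle model concrete, here is a minimal sketch (not from the paper) of the classic Frank-Wolfe method, which touches the feasible polytope only through a linear minimization oracle, i.e., a best-response oracle. The polytope (the probability simplex), the quadratic objective, and the open-loop step size are all illustrative assumptions; the paper's algorithms are variants with stronger guarantees.

```python
# Illustrative sketch: Frank-Wolfe over the probability simplex using only a
# best-response (linear minimization) oracle. All specifics here (simplex
# domain, quadratic objective, 2/(t+2) step size) are assumptions chosen for
# illustration, not the paper's actual algorithm.
import numpy as np

def best_response_oracle(grad):
    """argmin_{v in simplex} <grad, v>: the vertex at the smallest gradient
    coordinate. This is the only way the method accesses the polytope."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def frank_wolfe(grad_f, x0, iters=200):
    x = x0
    for t in range(iters):
        v = best_response_oracle(grad_f(x))  # one oracle call per iteration
        gamma = 2.0 / (t + 2)                # standard open-loop step size
        x = (1 - gamma) * x + gamma * v      # convex combination stays feasible
    return x

# Example: Euclidean projection of p onto the simplex, i.e. minimize ||x - p||^2.
p = np.array([0.6, 0.3, -0.2])
x = frank_wolfe(lambda x: 2 * (x - p), np.ones(3) / 3)
```

Each iterate is a convex combination of polytope vertices returned by the oracle, so feasibility is maintained without any projection step; the paper's contribution can be read as sharpening how many such oracle calls per iteration suffice for low regret.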