Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity

Variational inequalities (VIs) are a broad class of optimization problems encompassing machine learning problems ranging from standard convex minimization to more complex scenarios like min-max optimization and computing the equilibria of multi-player games. In convex optimization, strong convexity allows for fast statistical learning rates requiring only $O(1/\epsilon)$ stochastic first-order oracle calls to find an $\epsilon$-optimal solution, rather than the standard $O(1/\epsilon^2)$ calls. This note provides a simple overview of how one can similarly obtain fast rates for learning VIs that satisfy strong monotonicity, a generalization of strong convexity. Specifically, we demonstrate that standard stability-based generalization arguments for convex minimization extend directly to VIs when the domain admits a small covering, or when the operator is integrable and suboptimality is measured by potential functions, such as when finding equilibria in multi-player games.
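For concreteness, the following textbook definitions (our addition, not text from the note itself) make the abstract's terminology precise: a VI asks for a point at which the operator forms a non-negative inner product with every feasible direction, and strong monotonicity strengthens monotonicity the way strong convexity strengthens convexity.

```latex
% Variational inequality over a convex set X with operator F:
\text{VI}(F,\mathcal{X}):\quad \text{find } x^\star \in \mathcal{X}
  \ \text{ s.t. }\ \langle F(x^\star),\, x - x^\star \rangle \ge 0
  \quad \forall x \in \mathcal{X}.

% F is mu-strongly monotone if
\langle F(x) - F(y),\, x - y \rangle \ \ge\ \mu \,\lVert x - y \rVert^2
  \quad \forall x, y \in \mathcal{X}.

% Special case: when F = \nabla f for some potential f, mu-strong
% monotonicity of F is exactly mu-strong convexity of f, so convex
% minimization is recovered as the integrable case.
```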
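To see the fast rate concretely, here is a minimal toy sketch (our own illustration under assumed constants; it is not the paper's algorithm or experiments): stochastic operator descent on a synthetic strongly monotone linear operator with a skew-symmetric part, so it is not the gradient of any function. The squared distance to the solution decays roughly as $1/k$ after $k$ oracle calls, i.e., $\epsilon$-accuracy after $O(1/\epsilon)$ calls.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
B = rng.standard_normal((d, d))
S = B @ B.T + np.eye(d)          # symmetric positive definite part, S >= I
K = rng.standard_normal((d, d))
K = K - K.T                      # skew-symmetric part: monotone, but not a gradient
A = S + K                        # <A x, x> = <S x, x> >= ||x||^2, so mu = 1 works
b = rng.standard_normal(d)
x_star = np.linalg.solve(A, -b)  # unconstrained VI solution: F(x*) = A x* + b = 0

mu = 1.0                         # strong-monotonicity constant (lower bound)
Lip = np.linalg.norm(A, 2)       # Lipschitz constant of F(x) = A x + b
sigma = 0.5                      # oracle noise level
x = np.zeros(d)
for k in range(1, 100_001):
    g = A @ x + b + sigma * rng.standard_normal(d)  # one stochastic oracle call
    eta = min(1.0 / Lip, 1.0 / (mu * k))            # ~1/(mu k) steps give O(1/k) error
    x = x - eta * g
    if k in (10, 100, 1_000, 10_000, 100_000):
        print(f"k={k:>6}  ||x - x*||^2 = {np.linalg.norm(x - x_star)**2:.2e}")
```

The skew-symmetric component is what separates this from strongly convex minimization: the same step-size schedule still yields the fast rate because the analysis only uses strong monotonicity of the operator, not the existence of a potential.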
@article{zhao2025_2410.20649,
  title   = {Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity},
  author  = {Eric Zhao and Tatjana Chavdarova and Michael Jordan},
  journal = {arXiv preprint arXiv:2410.20649},
  year    = {2025}
}