
Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity

Abstract

Variational inequalities (VIs) are a broad class of optimization problems encompassing machine learning problems ranging from standard convex minimization to more complex scenarios such as min-max optimization and computing the equilibria of multi-player games. In convex optimization, strong convexity enables fast statistical learning rates, requiring only Θ(1/ϵ) stochastic first-order oracle calls to find an ϵ-optimal solution, rather than the standard Θ(1/ϵ²) calls. This note provides a simple overview of how one can similarly obtain fast Θ(1/ϵ) rates for learning VIs that satisfy strong monotonicity, a generalization of strong convexity. Specifically, we demonstrate that standard stability-based generalization arguments for convex minimization extend directly to VIs when the domain admits a small covering, or when the operator is integrable and suboptimality is measured via a potential function, such as when finding equilibria in multi-player games.
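For readers less familiar with the setup, the following are the standard textbook definitions behind the abstract's terminology (these are not notation taken from the paper itself). The VI problem over an operator F and a convex domain 𝒳 asks:

  find x* ∈ 𝒳 such that ⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ 𝒳.

The operator F is μ-strongly monotone if

  ⟨F(x) − F(y), x − y⟩ ≥ μ‖x − y‖² for all x, y ∈ 𝒳.

When F = ∇f for a convex function f, the VI recovers convex minimization, and μ-strong monotonicity of ∇f is exactly μ-strong convexity of f; this is the sense in which strong monotonicity generalizes strong convexity.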

@article{zhao2025_2410.20649,
  title={Learning Variational Inequalities from Data: Fast Generalization Rates under Strong Monotonicity},
  author={Eric Zhao and Tatjana Chavdarova and Michael Jordan},
  journal={arXiv preprint arXiv:2410.20649},
  year={2025}
}