Last-Iterate Convergence of Adaptive Riemannian Gradient Descent for Equilibrium Computation
Equilibrium computation on Riemannian manifolds provides a unifying framework for numerous problems in machine learning and data analytics. One of the simplest yet most fundamental methods is Riemannian gradient descent (RGD). While its Euclidean counterpart has been extensively studied, it remains unclear how manifold curvature affects RGD in game-theoretic settings. This paper addresses that gap by establishing new convergence results for \textit{geodesically strongly monotone} games. Our key result shows that RGD attains last-iterate linear convergence in a \textit{geometry-agnostic} fashion, a property crucial for applications in machine learning. We extend this guarantee to stochastic and adaptive variants -- SRGD and FARGD -- and establish that: (i) the sample complexity of SRGD is geometry-agnostic and optimal with respect to noise; (ii) FARGD matches the convergence rate of its non-adaptive counterpart up to constant factors, while avoiding any reliance on the condition number. Overall, this paper presents the first geometry-agnostic last-iterate convergence analysis for games beyond the Euclidean setting, underscoring the surprising power of RGD -- despite its simplicity -- in solving a wide spectrum of machine learning problems.
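To make the setting concrete, the display below is a minimal sketch of the update rule and the monotonicity notion the abstract refers to; the notation (game vector field $F$, exponential map $\operatorname{Exp}$, step size $\eta$, modulus $\mu$) is illustrative and follows one standard formulation from the Riemannian optimization literature, not necessarily the paper's own. The RGD iteration moves along a geodesic in the direction of the negative vector field:
\[
  x_{t+1} \;=\; \operatorname{Exp}_{x_t}\!\bigl(-\eta\, F(x_t)\bigr),
\]
where $F$ collects the players' Riemannian gradients and $\operatorname{Exp}_{x}$ is the exponential map at $x \in \mathcal{M}$. A standard notion of geodesic strong monotonicity with modulus $\mu > 0$ reads
\[
  \bigl\langle F(x), \operatorname{Exp}_x^{-1}(y)\bigr\rangle
  + \bigl\langle F(y), \operatorname{Exp}_y^{-1}(x)\bigr\rangle
  \;\le\; -\mu\, d(x,y)^2
  \qquad \text{for all } x, y \in \mathcal{M},
\]
which recovers the usual Euclidean condition $\langle F(x)-F(y),\, x-y\rangle \ge \mu \|x-y\|^2$ when $\mathcal{M} = \mathbb{R}^d$, since there $\operatorname{Exp}_x^{-1}(y) = y - x$.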