Accelerated Randomized Block-Coordinate Algorithms for Co-coercive Equations and Applications

Mathematics of Operations Research (MOR), 2023
Abstract

In this paper, we develop an accelerated randomized block-coordinate algorithm to approximate a solution of a co-coercive equation. Such an equation plays a central role in optimization and related fields and covers many mathematical models as special cases, including convex optimization, convex-concave minimax, and variational inequality problems. Our algorithm relies on a recent Nesterov-type accelerated interpretation of the Halpern fixed-point iteration in [48]. We establish that the new algorithm achieves an $\mathcal{O}(1/k^2)$ convergence rate on $\mathbb{E}[\Vert Gx^k\Vert^2]$ for the last iterate, where $G$ is the underlying co-coercive operator, $\mathbb{E}[\cdot]$ is the expectation, and $k$ is the iteration counter. This rate is significantly faster than the $\mathcal{O}(1/k)$ rates of standard forward or gradient-based methods from the literature. We also prove $o(1/k^2)$ rates on both $\mathbb{E}[\Vert Gx^k\Vert^2]$ and $\mathbb{E}[\Vert x^{k+1} - x^{k}\Vert^2]$. Next, we apply our method to derive two accelerated randomized block-coordinate variants of the forward-backward and Douglas-Rachford splitting schemes, respectively, for solving a monotone inclusion involving the sum of two operators. As a byproduct, these variants also have faster convergence rates than their non-accelerated counterparts. Finally, we apply our scheme to a finite-sum monotone inclusion that has various applications in machine learning and statistical learning, including federated learning. As a result, we obtain a novel federated learning-type algorithm with fast and provable convergence rates.
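To make the two ingredients concrete, the following is a minimal sketch (not the paper's exact scheme) combining a Halpern-anchored forward step for a co-coercive operator with a randomized block-coordinate update. The operator $G(x) = Ax - b$ with $A$ symmetric positive definite, the anchoring weight $\beta_k = 1/(k+2)$, the step size, and the block partition are all illustrative choices; the paper's actual parameters and convergence-rate constants differ.

```python
import numpy as np

# Illustrative setup (NOT the paper's experiments): G(x) = A @ x - b with A
# symmetric positive definite, so G is the gradient of a strongly convex
# quadratic and hence co-coercive with modulus 1/L, L = lambda_max(A).
rng = np.random.default_rng(0)
n, nblocks = 8, 4
M = rng.standard_normal((n, n))
A = M.T @ M / n + 0.5 * np.eye(n)   # well-conditioned SPD matrix
b = rng.standard_normal(n)
G = lambda x: A @ x - b
L = np.linalg.eigvalsh(A).max()
lam = 1.0 / L                       # step size within the co-coercivity range

blocks = np.array_split(np.arange(n), nblocks)
x0 = np.zeros(n)
x = x0.copy()
for k in range(2000):
    beta = 1.0 / (k + 2)            # Halpern anchoring weight, vanishing in k
    # Anchored forward step: convex combination of the anchor x0 and a
    # forward (gradient-type) step x - lam * G(x).
    y = beta * x0 + (1.0 - beta) * (x - lam * G(x))
    # Randomized block-coordinate update: refresh only one sampled block.
    i = rng.integers(nblocks)
    x = x.copy()
    x[blocks[i]] = y[blocks[i]]

residual = np.linalg.norm(G(x))     # the quantity whose square is bounded
print(residual)
```

The residual $\Vert Gx^k\Vert$ shrinks toward zero as the iteration proceeds; the paper's contribution is showing that, with suitably chosen parameters, the expected squared residual of such a randomized block scheme decays at the accelerated $\mathcal{O}(1/k^2)$ rate rather than the $\mathcal{O}(1/k)$ rate of the plain forward method.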
