Randomized Block-Coordinate Optimistic Gradient Algorithms for Root-Finding Problems

28 January 2025
Quoc Tran-Dinh
Yang Luo
Abstract

In this paper, we develop two new randomized block-coordinate optimistic gradient algorithms to approximate a solution of nonlinear equations in large-scale settings, which are called root-finding problems. Our first algorithm is non-accelerated with constant stepsizes, and achieves an $\mathcal{O}(1/k)$ best-iterate convergence rate on $\mathbb{E}[\Vert Gx^k\Vert^2]$ when the underlying operator $G$ is Lipschitz continuous and satisfies a weak Minty solution condition, where $\mathbb{E}[\cdot]$ is the expectation and $k$ is the iteration counter. Our second method is a new accelerated randomized block-coordinate optimistic gradient algorithm. We establish both $\mathcal{O}(1/k^2)$ and $o(1/k^2)$ last-iterate convergence rates on both $\mathbb{E}[\Vert Gx^k\Vert^2]$ and $\mathbb{E}[\Vert x^{k+1} - x^k\Vert^2]$ for this algorithm under the co-coerciveness of $G$. In addition, we prove that the iterate sequence $\{x^k\}$ converges to a solution almost surely, and $k\Vert Gx^k\Vert$ attains an $o(1/k)$ almost sure convergence rate. Then, we apply our methods to a class of large-scale finite-sum inclusions, which covers prominent applications in machine learning, statistical learning, and network optimization, especially in federated learning. We obtain two new federated learning-type algorithms and their convergence rate guarantees for solving this problem class.
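To make the flavor of the method concrete, below is a minimal NumPy sketch of a single-block randomized optimistic gradient iteration for the root-finding problem $Gx = 0$. The direction $2Gx^k - Gx^{k-1}$ is the standard optimistic (forward-reflected) update; the stepsize conditions, block-sampling distribution, and the accelerated variant analyzed in the paper are not reproduced here, and all names and parameters in this sketch are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def rbc_optimistic_gradient(G, x0, eta, n_blocks, n_iters, rng=None):
    """Illustrative sketch (not the paper's algorithm) of a randomized
    block-coordinate optimistic gradient loop for finding x with G(x) = 0.

    G        : callable mapping R^d -> R^d (the operator Gx).
    x0       : initial point, shape (d,).
    eta      : constant stepsize (assumed; the paper's conditions on it are not reproduced).
    n_blocks : number of coordinate blocks; one block is updated per iteration.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    g_prev = G(x)  # stores G(x^{k-1}) for the optimistic correction term
    blocks = np.array_split(np.arange(x.size), n_blocks)

    for _ in range(n_iters):
        # For simplicity we evaluate the full operator; a practical block-coordinate
        # implementation would only evaluate the sampled block of G.
        g = G(x)
        i = rng.integers(n_blocks)          # sample one block uniformly at random
        b = blocks[i]
        # Optimistic (forward-reflected) direction 2*G(x^k) - G(x^{k-1}), restricted to block b.
        x[b] = x[b] - eta * (2.0 * g[b] - g_prev[b])
        g_prev = g
    return x
```

As a usage illustration, for an affine operator `G = lambda x: A @ x - b` with a co-coercive `A`, calling `rbc_optimistic_gradient(G, np.zeros(d), eta, n_blocks, n_iters)` drives the residual norm toward zero; the rates quoted in the abstract apply to the authors' algorithms under their stated assumptions, not to this simplified sketch.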
