Random gradient extrapolation for distributed and stochastic optimization

15 November 2017
Guanghui Lan
Yi Zhou
arXiv:1711.05762
Abstract

In this paper, we consider a class of finite-sum convex optimization problems defined over a distributed multiagent network with $m$ agents connected to a central server. In particular, the objective function consists of the average of $m$ ($\ge 1$) smooth components associated with each network agent together with a strongly convex term. Our major contribution is to develop a new randomized incremental gradient algorithm, namely the random gradient extrapolation method (RGEM), which does not require any exact gradient evaluation even for the initial point, but can achieve the optimal ${\cal O}(\log(1/\epsilon))$ complexity bound in terms of the total number of gradient evaluations of component functions to solve the finite-sum problems. Furthermore, we demonstrate that for stochastic finite-sum optimization problems, RGEM maintains the optimal ${\cal O}(1/\epsilon)$ complexity (up to a certain logarithmic factor) in terms of the number of stochastic gradient computations, but attains an ${\cal O}(\log(1/\epsilon))$ complexity in terms of communication rounds (each round involves only one agent). It is worth noting that the former bound is independent of the number of agents $m$, while the latter one depends only linearly on $m$, or even on $\sqrt{m}$ for ill-conditioned problems. To the best of our knowledge, this is the first time that these complexity bounds have been obtained for distributed and stochastic optimization problems. Moreover, our algorithms were developed based on a novel dual perspective of Nesterov's accelerated gradient method.
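To make the setting concrete, below is a minimal sketch of the finite-sum template the abstract describes, $\min_x \frac{1}{m}\sum_{i=1}^m f_i(x) + \frac{\mu}{2}\|x\|^2$, where each smooth component $f_i$ would be held by one agent. The sketch uses a SAGA-style randomized incremental gradient update as an illustrative stand-in: it is not the authors' RGEM update (which extrapolates gradients via a dual view of Nesterov's method), and all names here (saga_finite_sum, the ridge-regression components) are hypothetical choices for the example.

import numpy as np

def saga_finite_sum(A, b, mu, step, iters, seed=0):
    """SAGA-style randomized incremental gradient method for
    min_x (1/m) * sum_i 0.5*(A[i] @ x - b[i])**2 + (mu/2)*||x||^2.
    Illustrative stand-in for the finite-sum template above;
    this is NOT the authors' RGEM update.
    """
    rng = np.random.default_rng(seed)
    m, d = A.shape
    x = np.zeros(d)
    grad_table = np.zeros((m, d))  # last gradient seen for each component
    grad_avg = np.zeros(d)         # running average of the table
    # The table starts at zero, so no exact full gradient is ever computed --
    # loosely analogous to RGEM needing no exact gradient evaluation
    # even at the initial point.
    for _ in range(iters):
        i = rng.integers(m)               # one agent/component per round
        g_new = (A[i] @ x - b[i]) * A[i]  # gradient of f_i at x
        # variance-reduced direction, plus gradient of the strongly convex term
        v = g_new - grad_table[i] + grad_avg + mu * x
        x -= step * v
        grad_avg += (g_new - grad_table[i]) / m  # keep the average in sync
        grad_table[i] = g_new
    return x

# Toy instance: m = 200 ridge-regression components in d = 10 dimensions.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)
x_hat = saga_finite_sum(A, b, mu=0.1, step=0.01, iters=20000)
x_star = np.linalg.solve(A.T @ A / 200 + 0.1 * np.eye(10), A.T @ b / 200)
print(np.linalg.norm(x_hat - x_star))  # distance to the true minimizer

Each iteration touches a single randomly chosen component, mirroring the abstract's communication model in which each round involves only one agent; the accelerated ${\cal O}(\log(1/\epsilon))$ communication bound of RGEM, however, comes from its gradient extrapolation scheme, which this sketch does not implement.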
