Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization

27 June 2015
Xiangru Lian
Yijun Huang
Y. Li
Ji Liu
Abstract

Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in training deep neural networks and have achieved many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly because of the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill this gap and provide theoretical support, this paper studies two asynchronous parallel implementations of SG: one on a computer network and the other on a shared-memory system. We establish an ergodic convergence rate of $O(1/\sqrt{K})$ for both algorithms and prove that linear speedup is achievable if the number of workers is bounded by $\sqrt{K}$ ($K$ is the total number of iterations). Our results generalize and improve the existing analysis for convex minimization.

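The shared-memory scheme the abstract refers to follows the familiar lock-free pattern: each worker reads the shared iterate (which may be stale), computes a stochastic gradient at that value, and writes the update back without synchronization. Below is a minimal Python sketch of that update rule, assuming a toy least-squares objective; the worker count, step size, and problem data are illustrative choices, not taken from the paper, and Python threads here only illustrate the mechanics rather than real speedup.

```python
# Minimal sketch of shared-memory asynchronous parallel SG (not the paper's
# reference implementation). Toy objective: f(x) = (1/n) * sum_i 0.5*(a_i^T x - b_i)^2.
import threading
import numpy as np

n, d = 2000, 20
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

x = np.zeros(d)          # shared iterate, updated in place without locks
K = 20000                # total number of stochastic gradient updates
step = 1e-3
num_workers = 4          # illustrative; the paper's theory bounds this by sqrt(K)
count = [0]              # shared (racy) step counter; good enough for a sketch

def worker(seed: int) -> None:
    local_rng = np.random.default_rng(seed)
    while count[0] < K:
        i = local_rng.integers(n)
        # Stochastic gradient of f_i(x) = 0.5*(a_i^T x - b_i)^2, evaluated at
        # the current (possibly stale) shared iterate.
        grad = (A[i] @ x - b[i]) * A[i]
        x[:] -= step * grad          # lock-free in-place write to shared memory
        count[0] += 1

threads = [threading.Thread(target=worker, args=(s,)) for s in range(num_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("distance to x_true:", np.linalg.norm(x - x_true))
```

The computer-network variant studied in the paper replaces the shared array with a parameter server that applies (possibly delayed) gradients sent by workers; the ergodic $O(1/\sqrt{K})$ rate above is stated for both settings.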