ResearchTrend.AI

Asynchronous Stochastic Gradient Descent with Variance Reduction for Non-Convex Optimization
12 April 2016 · Zhouyuan Huo, Heng-Chiao Huang · arXiv:1604.03584

Papers citing "Asynchronous Stochastic Gradient Descent with Variance Reduction for Non-Convex Optimization" (12 papers shown)

  1. Scaling up Stochastic Gradient Descent for Non-convex Optimisation. S. Mohamad, H. Alamri, A. Bouchachia. 06 Oct 2022.
  2. SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters. Zhuqing Liu, Xin Zhang, Jia-Wei Liu. 17 Aug 2022.
  3. Privacy-Preserving Asynchronous Federated Learning Algorithms for Multi-Party Vertically Collaborative Learning. Bin Gu, An Xu, Zhouyuan Huo, Cheng Deng, Heng-Chiao Huang. 14 Aug 2020. [FedML]
  4. Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization. Yangyang Xu, Yibo Xu. 31 May 2020.
  5. Straggler-Agnostic and Communication-Efficient Distributed Primal-Dual Algorithm for High-Dimensional Data Mining. Zhouyuan Huo, Heng-Chiao Huang. 09 Oct 2019. [FedML]
  6. Distributed Inexact Successive Convex Approximation ADMM: Analysis-Part I. Sandeep Kumar, K. Rajawat, Daniel P. Palomar. 21 Jul 2019.
  7. Double Quantization for Communication-Efficient Distributed Optimization. Yue Yu, Jiaxiang Wu, Longbo Huang. 25 May 2018. [MQ]
  8. Taming Convergence for Asynchronous Stochastic Gradient Descent with Unbounded Delay in Non-Convex Learning. Xin Zhang, Jia-Wei Liu, Zhengyuan Zhu. 24 May 2018.
  9. Parallel and Distributed Successive Convex Approximation Methods for Big-Data Optimization. G. Scutari, Ying Sun. 17 May 2018.
  10. Zeroth Order Nonconvex Multi-Agent Optimization over Networks. Davood Hajinezhad, Mingyi Hong, Alfredo García. 27 Oct 2017.
  11. Asynchronous Stochastic Block Coordinate Descent with Variance Reduction. Bin Gu, Zhouyuan Huo, Heng-Chiao Huang. 29 Oct 2016.
  12. Asynchronous Parallel Algorithms for Nonconvex Optimization. Loris Cannelli, F. Facchinei, Vyacheslav Kungurtsev, G. Scutari. 17 Jul 2016.