Asynchronous Doubly Stochastic Proximal Optimization with Variance Reduction
In the big data era, both the sample size and the dimension of a problem can be huge at the same time. Asynchronous parallel technology was recently proposed to handle big data: asynchronous stochastic gradient descent algorithms were proposed to scale with the sample size, and asynchronous stochastic coordinate descent algorithms were proposed to scale with the dimension. However, few existing asynchronous parallel algorithms scale well in the sample size and the dimension simultaneously. In this paper, we focus on a composite objective function consisting of a smooth convex function f and a separable convex function g. We propose an asynchronous doubly stochastic proximal optimization algorithm with variance reduction (AsyDSPOVR) that scales well with both the sample size and the dimension. We prove that AsyDSPOVR achieves a linear convergence rate when f satisfies the optimal strong convexity property, and a sublinear rate when f is generally convex.
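To make the "doubly stochastic" idea concrete, here is a minimal sequential (non-asynchronous) sketch: each inner step samples one training example i and one coordinate j, and uses an SVRG-style full-gradient snapshot for variance reduction before a proximal update of that single coordinate. The lasso objective, the step size eta, and the helper soft_threshold are illustrative assumptions for this sketch, not the paper's actual AsyDSPOVR implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * |.|, i.e. of the separable L1 term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def doubly_stochastic_prox_vr(A, b, lam=0.1, eta=0.01, n_epochs=20, seed=0):
    """Sequential sketch of doubly stochastic proximal optimization with
    variance reduction for the composite problem
        min_x f(x) + g(x),  f(x) = (1/2n) ||Ax - b||^2,  g(x) = lam * ||x||_1.
    Each inner step is doubly stochastic: random sample i, random coordinate j."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        # Snapshot point and its full gradient (variance-reduction anchor).
        x_tilde = x.copy()
        full_grad = A.T @ (A @ x_tilde - b) / n
        for _ in range(n * d):
            i = rng.integers(n)   # random training example
            j = rng.integers(d)   # random coordinate
            # SVRG-style variance-reduced partial derivative along coordinate j:
            # grad_j f_i(x) - grad_j f_i(x_tilde) + [grad f(x_tilde)]_j
            g_j = (A[i, j] * (A[i] @ x - b[i])
                   - A[i, j] * (A[i] @ x_tilde - b[i])
                   + full_grad[j])
            # Proximal (soft-threshold) step on the single coordinate j.
            x[j] = soft_threshold(x[j] - eta * g_j, eta * lam)
    return x
```

Because each inner step touches only one example and one coordinate, the per-iteration cost is O(d) here (and can be lower with sparse data), which is what allows the asynchronous variant in the paper to scale in both the sample size and the dimension.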