Asynchronous Stochastic Block Coordinate Descent with Variance Reduction

Abstract

Asynchronous parallel implementations of stochastic optimization have achieved great success recently, both in theory and in practice. Lock-free asynchronous implementations are more efficient than those that rely on write or read locks. In this paper, we focus on a composite objective function consisting of a smooth convex function f and a block-separable convex function, a structure that arises widely in machine learning and computer vision. We propose an asynchronous stochastic block coordinate descent algorithm with the variance reduction acceleration technique (AsySBCDVR), which is lock-free in both implementation and analysis. AsySBCDVR is particularly important because it scales well with the sample size and the dimension simultaneously. We prove that AsySBCDVR achieves a linear convergence rate when the function f satisfies the optimal strong convexity property, and a sublinear rate when f is generally convex. More importantly, a near-linear speedup can be obtained on a parallel system with shared memory.
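To make the setup concrete, below is a minimal serial (non-asynchronous) sketch of an SVRG-style variance-reduced block coordinate update with a proximal step for the block-separable term. The lasso instance, function names, block partition, and step size are illustrative assumptions for this sketch, not the paper's exact AsySBCDVR procedure or analysis.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1, applied blockwise
    # (the L1 norm is a block-separable convex regularizer).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def svrg_bcd(A, b, lam=0.1, step=0.01, n_epochs=20, block_size=10, seed=0):
    """Serial SVRG-style block coordinate descent for the hypothetical
    composite problem  min_x (1/2n)||Ax - b||^2 + lam * ||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    blocks = [np.arange(s, min(s + block_size, d))
              for s in range(0, d, block_size)]
    for _ in range(n_epochs):
        x_snap = x.copy()
        # Full gradient at the snapshot: the variance-reduction anchor.
        mu = A.T @ (A @ x_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)                      # random sample
            k = blocks[rng.integers(len(blocks))]    # random block
            a_i = A[i]
            # Variance-reduced stochastic gradient restricted to block k:
            # grad_k f_i(x) - grad_k f_i(x_snap) + mu_k.
            g_k = (a_i[k] * (a_i @ x - b[i])
                   - a_i[k] * (a_i @ x_snap - b[i]) + mu[k])
            # Proximal (soft-threshold) update on block k only.
            x[k] = soft_threshold(x[k] - step * g_k, step * lam)
    return x

# Usage on synthetic sparse-regression data:
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = np.zeros(50); x_true[:5] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = svrg_bcd(A, b)

In the asynchronous shared-memory setting studied by the paper, multiple workers would run the inner loop concurrently on the shared iterate without locks; the sketch above shows only the variance-reduced blockwise proximal step itself.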
