The cyclic block coordinate descent-type (CBCD-type) methods have shown remarkable computational performance for solving strongly convex minimization problems. Typical applications include many popular statistical machine learning methods such as elastic-net regression, ridge penalized logistic regression, and sparse additive regression. Existing optimization literature has shown that the CBCD-type methods attain an iteration complexity of $\mathcal{O}(p\log(1/\epsilon))$, where $\epsilon$ is a pre-specified accuracy of the objective value and $p$ is the number of blocks. However, such iteration complexity explicitly depends on $p$, and is therefore at least $p$ times worse than that of the gradient descent (GD) methods. To bridge this theoretical gap, we propose an improved convergence analysis for the CBCD-type methods. In particular, we first show that for a family of quadratic minimization problems, the iteration complexity of the CBCD-type methods matches that of the GD methods in terms of the dependency on $p$ (up to a $\log^2 p$ factor). Thus our complexity bounds are sharper than the existing bounds by at least a factor of $p/\log^2 p$. We also provide a lower bound to confirm that our improved complexity bounds are tight (up to a $\log^2 p$ factor) if the largest and smallest eigenvalues of the Hessian matrix do not scale with $p$. Finally, we generalize our analysis to other strongly convex minimization problems beyond quadratic ones.
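To make the setting concrete, the following is a minimal sketch of generic cyclic block coordinate descent with exact block minimization applied to a ridge-regression quadratic, one of the strongly convex problems mentioned above. It is not the authors' implementation; the function name, block partition, regularization parameter, and epoch count are illustrative assumptions.

```python
import numpy as np

def cyclic_bcd_ridge(A, b, lam, num_blocks, num_epochs=100):
    """Cyclic block coordinate descent (illustrative sketch) for the ridge objective
    f(x) = 0.5*||A x - b||^2 + 0.5*lam*||x||^2, which is strongly convex for lam > 0.

    Each epoch sweeps the p blocks in a fixed cyclic order and exactly minimizes
    the objective over one block while holding the other blocks fixed.
    """
    n, d = A.shape
    x = np.zeros(d)
    blocks = np.array_split(np.arange(d), num_blocks)
    residual = A @ x - b                        # maintained so each block update is cheap
    for _ in range(num_epochs):
        for idx in blocks:
            A_j = A[:, idx]
            # Exact block minimization: solve the small regularized normal equations
            # (A_j^T A_j + lam*I) x_j = A_j^T (A_j x_j_old - residual)
            rhs = A_j.T @ (A_j @ x[idx] - residual)
            H_j = A_j.T @ A_j + lam * np.eye(len(idx))
            x_new = np.linalg.solve(H_j, rhs)
            residual += A_j @ (x_new - x[idx])  # incremental residual update
            x[idx] = x_new
    return x

# Usage: compare against the closed-form ridge solution on synthetic data.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
x_hat = cyclic_bcd_ridge(A, b, lam=1.0, num_blocks=10)
x_star = np.linalg.solve(A.T @ A + 1.0 * np.eye(50), A.T @ b)
print(np.linalg.norm(x_hat - x_star))  # should be small after enough epochs
```

In this sketch one "iteration" of the abstract corresponds to one full cyclic sweep over the p blocks; the paper's complexity bounds count how many such sweeps are needed to reach an $\epsilon$-accurate objective value.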