Pathwise Coordinate Optimization
We consider ``one-at-a-time'' coordinate-wise descent algorithms for a class of convex optimization problems. An algorithm of this kind has been proposed for the L_1-penalized regression (lasso) in the literature, but it seems to have been largely ignored. Indeed, it seems that coordinate-wise algorithms are not often used in convex optimization. We show that this algorithm is very competitive with the well known LARS (or homotopy) procedure in large lasso problems, and that it can be applied to related methods such as the garotte and elastic net. It turns out that coordinate-wise descent does not work in the ``fused lasso'' however, so we derive a generalized algorithm that yields the solution in much less time than a standard convex optimizer. Finally, we generalize the procedure to the two-dimensional fused lasso, and demonstrate its performance on some image smoothing problems.
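As a minimal sketch of the one-at-a-time coordinate-wise descent the abstract refers to (not the authors' implementation), the following Python snippet cycles through the coordinates of a lasso problem and applies the standard soft-thresholding update to each; the function names, the 1/(2n) scaling of the squared-error term, and the fixed iteration count are assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: sign(z) * max(|z| - gamma, 0)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam, n_iter=100):
    # Coordinate-wise descent for
    #   minimize (1/(2n)) * ||y - X beta||^2 + lam * ||beta||_1
    # (no intercept; illustrative sketch, not the paper's code).
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature x_j'x_j / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            # Exact minimizer in coordinate j is a soft-thresholded
            # univariate least-squares coefficient
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
    return beta
```

Each coordinate update has a closed form, which is why, as the abstract notes, the pathwise approach can be competitive with LARS on large problems when run over a decreasing grid of penalty values.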