Fast Saddle-Point Algorithm for Generalized Dantzig Selector and FDR Control with the Ordered ℓ1-Norm

In this paper we propose a primal-dual proximal extragradient algorithm for solving the generalized Dantzig selector (GDS) estimation problem, based on a new convex-concave saddle-point (SP) formulation of the GDS and a simple gradient extrapolation technique. Our reformulation makes it possible to adapt recent developments in saddle-point optimization and thereby achieve the optimal rate of convergence. Compared to the optimal non-SP algorithms, ours does not require the specification of sensitive parameters that affect algorithm performance or solution quality. We also provide a new analysis showing that acceleration is possible in special cases even without strong convexity or strong smoothness. As an application, we propose a GDS equipped with the ordered ℓ1-norm and establish its false discovery rate (FDR) control properties in variable selection. We compare the performance of our algorithm against alternatives including the linearized ADMM, Nesterov's smoothing, Nemirovski's mirror-prox, and accelerated hybrid proximal extragradient techniques.
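To make the saddle-point idea concrete, consider the ℓ1 Dantzig selector, min_x ||x||_1 subject to ||A^T(y − Ax)||_∞ ≤ λ. Dualizing the constraint (the conjugate of λ||·||_1 is the indicator of the ℓ∞ ball of radius λ) gives the saddle-point problem min_x max_u ||x||_1 + ⟨u, Kx − b⟩ − λ||u||_1, with K = A^T A and b = A^T y. The following is a minimal illustrative sketch of a primal-dual method with gradient extrapolation on this formulation, in the style of Chambolle-Pock; it is not the paper's exact algorithm, and the step-size rule τσ||K||² < 1 is a standard assumption, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t * ||.||_1: componentwise shrinkage toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gds_primal_dual(A, y, lam, iters=20000):
    """Illustrative primal-dual extragradient sketch for the l1 Dantzig
    selector  min ||x||_1  s.t.  ||A^T (y - A x)||_inf <= lam,
    via the saddle point
        min_x max_u  ||x||_1 + <u, K x - b> - lam * ||u||_1,
    where K = A^T A and b = A^T y (hypothetical parameter choices)."""
    K = A.T @ A
    b = A.T @ y
    L = np.linalg.norm(K, 2)        # operator norm of the coupling matrix
    tau = sigma = 0.99 / L          # step sizes satisfying tau*sigma*L^2 < 1
    n = A.shape[1]
    x = np.zeros(n)
    x_bar = x.copy()                # extrapolated primal point
    u = np.zeros(n)
    for _ in range(iters):
        # Dual proximal ascent step at the extrapolated primal point.
        u = soft_threshold(u + sigma * (K @ x_bar - b), sigma * lam)
        # Primal proximal descent step (K is symmetric here).
        x_new = soft_threshold(x - tau * (K @ u), tau)
        # Gradient extrapolation on the primal sequence.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x
```

On a small synthetic instance with an exactly sparse ground truth, the iterates approach a feasible point whose ℓ1 norm does not exceed that of the (feasible) ground truth, which is the qualitative behavior one expects from the Dantzig selector.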