We propose a rank-k variant of the classical Frank-Wolfe algorithm to solve convex optimization over a trace-norm ball. Our algorithm replaces the top singular-vector computation (1-SVD) in Frank-Wolfe with a top-k singular-vector computation (k-SVD), which can be done by repeatedly applying 1-SVD k times. Alternatively, our algorithm can be viewed as a rank-k restricted version of projected gradient descent. We show that our algorithm has a linear convergence rate when the objective function is smooth and strongly convex, and the optimal solution has rank at most k. This improves the convergence rate and the total time complexity of the Frank-Wolfe method and its variants.
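Below is a minimal sketch, not the authors' exact update rule, of the "rank-k restricted projected gradient descent" view mentioned above: take a gradient step, keep only the top-k singular directions, and project the k singular values back onto the trace-norm ball of radius theta. The names (grad_f, L, theta, k), the step size 1/L, and the singular-value projection details are assumptions for illustration.

```python
import numpy as np
from scipy.sparse.linalg import svds


def project_simplex_ball(s, theta):
    """Project a nonnegative vector s onto {x >= 0 : sum(x) <= theta}."""
    if s.sum() <= theta:
        return s
    # Standard sorting-based projection onto the scaled simplex.
    u = np.sort(s)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - theta))[0][-1]
    tau = (css[rho] - theta) / (rho + 1.0)
    return np.maximum(s - tau, 0.0)


def rank_k_pgd_step(X, grad_f, L, theta, k):
    """One rank-k restricted projected-gradient step over the trace-norm
    ball {X : ||X||_* <= theta}; the k-SVD is done here with scipy's svds
    (a sketch, not necessarily the oracle used in the paper)."""
    Y = X - (1.0 / L) * grad_f(X)          # gradient step with assumed step size 1/L
    U, s, Vt = svds(Y, k=k)                # top-k SVD (k-SVD) of the shifted iterate
    s = project_simplex_ball(s, theta)     # shrink singular values into the ball
    return (U * s) @ Vt                    # rank-<=k iterate inside the trace-norm ball
```

Running this step repeatedly from a feasible starting point (e.g., the zero matrix) gives a rank-k iterate at every step, in contrast to classical Frank-Wolfe, which adds one rank-1 atom (from a 1-SVD) per iteration.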