Local Search Algorithms for Rank-Constrained Convex Optimization

International Conference on Learning Representations (ICLR), 2021
Abstract

We propose greedy and local search algorithms for rank-constrained convex optimization, namely solving $\min_{\mathrm{rank}(A)\leq r^*} R(A)$ given a convex function $R:\mathbb{R}^{m\times n}\rightarrow \mathbb{R}$ and a parameter $r^*$. These algorithms consist of repeating two steps: (a) adding a new rank-1 matrix to $A$ and (b) enforcing the rank constraint on $A$. We refine and improve the theoretical analysis of Shalev-Shwartz et al. (2011), and show that if the rank-restricted condition number of $R$ is $\kappa$, a solution $A$ with rank $O(r^*\cdot \min\{\kappa \log \frac{R(\mathbf{0})-R(A^*)}{\epsilon}, \kappa^2\})$ and $R(A) \leq R(A^*) + \epsilon$ can be recovered, where $A^*$ is the optimal solution. This significantly generalizes associated results on sparse convex optimization, as well as rank-constrained convex optimization for smooth functions. We then introduce new practical variants of these algorithms that have superior runtime and recover better solutions in practice. We demonstrate the versatility of these methods on a wide range of applications involving matrix completion and robust principal component analysis.
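The two-step scheme described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: the gradient oracle `grad_R`, the fixed step size, and the plain SVD truncation used to enforce the rank constraint are all assumptions made for the example.

```python
import numpy as np

def greedy_rank_constrained_min(grad_R, shape, r, n_iters=200, step=0.5):
    """Sketch of a greedy scheme for min_{rank(A) <= r} R(A).

    Illustrative only: the update rules here (top singular pair of the
    negative gradient, fixed step size, hard SVD truncation) are
    assumptions, not the algorithms analyzed in the paper.
    """
    A = np.zeros(shape)
    for _ in range(n_iters):
        # (a) add a rank-1 matrix: top singular pair of the negative gradient
        G = grad_R(A)
        U, s, Vt = np.linalg.svd(-G, full_matrices=False)
        A = A + step * s[0] * np.outer(U[:, 0], Vt[0, :])
        # (b) enforce the rank constraint by truncating to the top r singular values
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        A = (U[:, :r] * s[:r]) @ Vt[:r, :]
    return A
```

For instance, with the smooth objective $R(A)=\tfrac{1}{2}\lVert A-M\rVert_F^2$ (so `grad_R = lambda A: A - M`) and a rank-1 target $M$, the iterates converge to $M$ under the rank-1 constraint.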
