The local convexity of solving systems of quadratic equations
This paper considers the recovery of a rank-$r$ positive semidefinite matrix $X = UU^T \in \mathbb{R}^{n\times n}$ from $m$ scalar measurements of the form $y_i := a_i^T X a_i$ (i.e., quadratic measurements of $U$). Such problems arise in a variety of applications, including covariance sketching of high-dimensional data streams, quadratic regression, and quantum state tomography. A natural approach to this problem is to minimize the loss function $f(U) = \sum_i \left(y_i - a_i^T UU^T a_i\right)^2$; this is non-convex in the matrix $U$, but methods like gradient descent are simple and easy to implement, as compared to convex approaches to this problem. In this paper we show that, once we have sufficiently many samples from isotropic Gaussian $a_i$, with high probability (a) this function admits a dimension-independent region of local strong convexity, and (b) a simple spectral initialization will land within this region of convexity. Together, this implies that gradient descent with spectral initialization (but no re-sampling) will converge linearly to the correct $X$. We believe that our general technique (local convexity reachable by spectral initialization) should prove applicable to a broader class of non-convex optimization problems.
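As an illustration of the pipeline the abstract describes, the following is a minimal NumPy sketch (my own, not the authors' code; all step sizes, sample sizes, and the trace-based eigenvalue shift are my illustrative choices): it draws isotropic Gaussian $a_i$, forms quadratic measurements $y_i = a_i^T X a_i$, builds a spectral initialization from the top-$r$ eigenvectors of $\frac{1}{m}\sum_i y_i a_i a_i^T$ (whose Gaussian expectation is $2X + \mathrm{tr}(X)I$), and then runs plain gradient descent on $f(U) = \frac{1}{4m}\sum_i (a_i^T UU^T a_i - y_i)^2$ without re-sampling.

```python
# Sketch of the abstract's approach: recover a rank-r PSD matrix X = U U^T
# from quadratic measurements y_i = a_i^T X a_i via spectral initialization
# followed by gradient descent on f(U) = (1/4m) sum_i (a_i^T U U^T a_i - y_i)^2.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 400  # illustrative sizes, not the paper's sample complexity

# Ground truth: X = U* U*^T with orthonormal U*, so X has r unit eigenvalues.
U_true, _ = np.linalg.qr(rng.standard_normal((n, r)))
X_true = U_true @ U_true.T

# Isotropic Gaussian measurement vectors (rows a_i) and quadratic measurements.
A = rng.standard_normal((m, n))
y = np.einsum('ij,jk,ik->i', A, X_true, A)  # y_i = a_i^T X a_i

def loss_and_grad(U):
    AU = A @ U                                   # m x r
    resid = np.einsum('ij,ij->i', AU, AU) - y    # a_i^T U U^T a_i - y_i
    loss = np.sum(resid ** 2) / (4 * m)
    grad = (A.T @ (resid[:, None] * AU)) / m     # (1/m) sum_i resid_i a_i a_i^T U
    return loss, grad

# Spectral initialization: top-r eigenpairs of M = (1/m) sum_i y_i a_i a_i^T.
# For Gaussian a_i, E[M] = 2X + tr(X) I; tr(X) is estimated by mean(y), and
# the shifted eigenvalues are halved to scale the initial columns.
M = (A.T * y) @ A / m
evals, evecs = np.linalg.eigh(M)
t = y.mean()                                     # estimate of tr(X)
top = np.argsort(evals)[-r:]
U = evecs[:, top] * np.sqrt(np.maximum((evals[top] - t) / 2, 0))

# Plain gradient descent with a hand-tuned fixed step size (no re-sampling).
eta, losses = 0.02, []
for _ in range(3000):
    loss, grad = loss_and_grad(U)
    losses.append(loss)
    U -= eta * grad

rel_err = np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true)
```

Note that $U$ is only identifiable up to right-multiplication by an orthogonal matrix, so recovery is measured on $UU^T$ (`rel_err`) rather than on $U$ itself.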