
Sparse PCA via Covariance Thresholding

Abstract

In sparse principal component analysis we are given noisy observations of a low-rank matrix of dimension $n\times p$ and seek to reconstruct it under additional sparsity assumptions. In particular, we assume here that each of the principal components $\mathbf{v}_1,\dots,\mathbf{v}_r$ has at most $s_0$ non-zero entries. We are particularly interested in the high-dimensional regime wherein $p$ is comparable to, or even much larger than, $n$. In an influential paper, \cite{johnstone2004sparse} introduced a simple algorithm that estimates the support of the principal vectors $\mathbf{v}_1,\dots,\mathbf{v}_r$ from the largest entries in the diagonal of the empirical covariance. This method can be shown to identify the correct support with high probability if $s_0\le K_1\sqrt{n/\log p}$, and to fail with high probability if $s_0\ge K_2\sqrt{n/\log p}$, for two constants $0<K_1,K_2<\infty$. Despite a considerable amount of work over the last ten years, no practical algorithm exists with provably better support recovery guarantees. Here we analyze a covariance thresholding algorithm that was recently proposed by \cite{KrauthgamerSPCA}. On the basis of numerical simulations (for the rank-one case), these authors conjectured that covariance thresholding correctly recovers the support with high probability for $s_0\le K\sqrt{n}$ (assuming $n$ is of the same order as $p$). We prove this conjecture, and in fact establish a more general guarantee that covers the higher-rank case as well as $n$ much smaller than $p$. Recent lower bounds \cite{berthet2013computational, ma2015sum} suggest that no polynomial-time algorithm can do significantly better. The key technical component of our analysis develops new bounds on the norm of kernel random matrices, in regimes that were not considered before.
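To make the procedure concrete, the following is a minimal NumPy sketch of rank-one covariance thresholding in the spiked model, written for illustration only: the threshold constant `tau`, the identity subtraction, and the simple top-$s_0$ support rule are assumptions of this sketch, not the paper's exact prescription (which, among other refinements, splits the samples between the thresholding and support-estimation steps).

```python
import numpy as np

def covariance_thresholding(X, tau=3.0, s0=10):
    """Illustrative sketch of rank-one covariance thresholding.

    Assumes the spiked model: rows of X are i.i.d. with covariance
    I + beta * v v^T, where v is s0-sparse and unit norm. The defaults
    for tau and s0 are placeholders, not tuned values.
    """
    n, p = X.shape
    # Empirical covariance with the identity (noise level) removed.
    sigma_hat = X.T @ X / n - np.eye(p)
    # Soft-threshold every entry at level tau / sqrt(n): pure-noise entries
    # are O(1/sqrt(n)) and get zeroed, while the s0 x s0 signal block has
    # entries of order beta / s0, which survive whenever s0 is of order
    # sqrt(n) or smaller -- the regime the paper's guarantee covers.
    t = tau / np.sqrt(n)
    eta = np.sign(sigma_hat) * np.maximum(np.abs(sigma_hat) - t, 0.0)
    # The leading eigenvector of the thresholded matrix estimates v (up to sign).
    _, eigvecs = np.linalg.eigh(eta)
    v_hat = eigvecs[:, -1]
    # Crude support estimate: the s0 largest-magnitude coordinates.
    support = np.sort(np.argsort(-np.abs(v_hat))[:s0])
    return v_hat, support

# Toy usage: an s0-sparse spike with p comparable to n.
rng = np.random.default_rng(0)
n, p, s0, beta = 2000, 1000, 40, 2.0
v = np.zeros(p)
v[:s0] = 1.0 / np.sqrt(s0)
X = np.sqrt(beta) * np.outer(rng.standard_normal(n), v) \
    + rng.standard_normal((n, p))
v_hat, support = covariance_thresholding(X, tau=3.0, s0=s0)
```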
