PCA from noisy, linearly reduced data: the diagonal case

Suppose we observe data of the form $Y_i = D_i(S_i + \varepsilon_i) \in \mathbb{R}^p$ or $Y_i = D_i S_i + \varepsilon_i \in \mathbb{R}^p$, $i = 1, \dots, n$, where the $D_i$ are known diagonal matrices, the $\varepsilon_i$ are noise, and we wish to perform principal component analysis (PCA) on the unobserved signals $S_i$. The first model arises in missing data problems, where the $D_i$ are binary. The second model captures noisy deconvolution problems, where the $D_i$ are the Fourier transforms of the convolution kernels. It is often reasonable to assume that the $S_i$ lie on an unknown low-dimensional linear space; however, because many coordinates can be suppressed by the $D_i$, this low-dimensional structure can be obscured. We introduce diagonally reduced spiked covariance models to capture this setting. We characterize the behavior of the singular vectors and singular values of the data matrix under high-dimensional asymptotics where $n, p \to \infty$ such that $p/n \to \gamma > 0$. Our results hold under the most general assumptions to date, even without diagonal reduction. Using them, we develop optimal eigenvalue shrinkage methods for covariance matrix estimation and optimal singular value shrinkage methods for data denoising. Finally, we characterize the error rates of the empirical Best Linear Predictor (EBLP) denoisers. We show that, perhaps surprisingly, their optimal tuning depends on whether we denoise in-sample or out-of-sample, but the optimally tuned mean squared error is the same in the two cases.
View on arXiv
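
For concreteness, below is a minimal simulation sketch (not taken from the paper; the dimensions, rank, spike strengths, and 70% observation rate are illustrative assumptions) that draws data from a toy diagonally reduced spiked model under both observation models and inspects the leading singular values of the scaled data matrix:

# Illustrative sketch: toy diagonally reduced spiked model.
# Model 1 ("missing data"): Y_i = D_i (S_i + eps_i) with binary D_i.
# Model 2 ("deconvolution"): Y_i = D_i S_i + eps_i.
# All numerical choices below are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, p, r = 2000, 500, 3                        # samples, dimension, spike rank (illustrative)

# Low-rank signals S_i: orthonormal directions V with spike variances ell
V, _ = np.linalg.qr(rng.standard_normal((p, r)))
ell = np.array([25.0, 16.0, 9.0])             # spike strengths (illustrative)
S = (rng.standard_normal((n, r)) * np.sqrt(ell)) @ V.T

eps = rng.standard_normal((n, p))             # unit-variance noise
D = (rng.random((n, p)) < 0.7).astype(float)  # binary diagonal reductions, ~70% of coordinates observed

Y_missing = D * (S + eps)                     # model 1: Y_i = D_i (S_i + eps_i)
Y_deconv  = D * S + eps                       # model 2: Y_i = D_i S_i + eps_i

# Top singular values of the scaled data matrix; under a spiked model only a few
# separate from the noise bulk, and their locations and vectors are what the
# high-dimensional theory describes.
sv = np.linalg.svd(Y_missing / np.sqrt(n), compute_uv=False)
print("top 6 singular values:", np.round(sv[:6], 3))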