Consistent Estimation for PCA and Sparse Regression with Oblivious Outliers

Abstract

We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an $\alpha$ fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size $n \gtrsim (k \log d)/\alpha^2$ and optimal error rate $O(\sqrt{(k \log d)/(n \cdot \alpha^2)})$, where $n$ is the number of observations, $d$ is the number of dimensions, and $k$ is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers $\alpha$ is $o(1/\log \log n)$, even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise were only shown in the dense setting (i.e., general linear regression) very recently by d'Orsi et al. [dNS21]. In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (of the kind commonly used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumption that the measurement noise corresponding to the inliers is polynomially small in $n$ (e.g., Gaussian with variance $1/n^2$). To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the $\ell_1$ norm or the nuclear norm, and we extend d'Orsi et al.'s approach [dNS21] in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems.
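To make the sparse-regression estimator concrete, the sketch below fits the $\ell_1$-regularized Huber objective $\hat{\beta} \in \arg\min_{\beta} \sum_{i=1}^{n} h_{\delta}(y_i - \langle x_i, \beta \rangle) + \lambda \|\beta\|_1$ by proximal gradient descent (ISTA). This is a minimal illustrative sketch, not the paper's algorithm or analysis: the solver choice, the Huber transition point `delta`, the regularization weight `lam`, the iteration budget, and the toy data are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's method): l1-regularized Huber
# regression fit by proximal gradient descent (ISTA).
import numpy as np

def huber_score(r, delta):
    """Derivative of the Huber loss: identity near 0, clipped in the tails."""
    return np.clip(r, -delta, delta)

def soft_threshold(z, t):
    """Proximal operator of the l1 norm (coordinate-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def huber_lasso(X, y, lam, delta=1.0, n_iters=2000):
    """Minimize sum_i h_delta(y_i - <x_i, beta>) + lam * ||beta||_1 via ISTA."""
    n, d = X.shape
    beta = np.zeros(d)
    # Step size from a Lipschitz bound on the smooth part (h'' <= 1).
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    for _ in range(n_iters):
        r = y - X @ beta                      # residuals
        grad = -X.T @ huber_score(r, delta)   # gradient of the Huber term
        beta = soft_threshold(beta - step * grad, step * lam)
    return beta

# Toy usage: sparse signal, oblivious corruptions on most responses.
rng = np.random.default_rng(0)
n, d, k = 500, 200, 5
X = rng.standard_normal((n, d))
beta_star = np.zeros(d)
beta_star[:k] = 1.0
y = X @ beta_star + rng.standard_normal(n)
outliers = rng.random(n) > 0.3               # roughly 70% of responses corrupted
y[outliers] += 100.0 * rng.standard_normal(outliers.sum())
beta_hat = huber_lasso(X, y, lam=2.0 * np.sqrt(n * np.log(d)))
print(np.linalg.norm(beta_hat - beta_star))
```

Soft-thresholding is the proximal operator of the $\ell_1$ norm, so each iteration costs two matrix-vector products; under the same template, replacing it with singular-value thresholding gives a nuclear-norm-regularized variant in the spirit of the PCA setting discussed above.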
