Algorithms for Heavy-Tailed Statistics: Regression, Covariance Estimation, and Beyond

Abstract

We study efficient algorithms for linear regression and covariance estimation in the absence of Gaussian assumptions on the underlying distributions of samples, assuming instead only that finitely many moments exist. We focus on how many samples are needed to perform estimation and regression with high accuracy and exponentially good success probability. For covariance estimation, linear regression, and several other problems, estimators have recently been constructed whose sample complexities and error rates match what is possible when the underlying distribution is Gaussian, but algorithms for these estimators require exponential time. We narrow the gap between the Gaussian and heavy-tailed settings for polynomial-time estimators with:

1. A polynomial-time estimator which takes $n$ samples from a random vector $X \in \mathbb{R}^d$ with covariance $\Sigma$ and produces $\hat{\Sigma}$ such that, in spectral norm, $\|\hat{\Sigma} - \Sigma\|_2 \leq \tilde{O}(d^{3/4}/\sqrt{n})$ with probability $1 - 2^{-d}$. The information-theoretically optimal error bound is $\tilde{O}(\sqrt{d/n})$; previous approaches to polynomial-time algorithms were stuck at $\tilde{O}(d/\sqrt{n})$.

2. A polynomial-time algorithm which takes $n$ samples $(X_i, Y_i)$ where $Y_i = \langle u, X_i \rangle + \varepsilon_i$ and produces $\hat{u}$ such that the loss $\|u - \hat{u}\|^2 \leq O(d/n)$ with probability $1 - 2^{-d}$ for any $n \geq d^{3/2} \log(d)^{O(1)}$. This (information-theoretically optimal) error is achieved by inefficient algorithms for any $n \gg d$; previous polynomial-time algorithms suffer loss $\Omega(d^2/n)$ and require $n \gg d^2$.

Our algorithms use degree-8 sum-of-squares semidefinite programs. We offer preliminary evidence that improving these rates of error in polynomial time is not possible in the median-of-means framework our algorithms employ.
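
To make the median-of-means framework mentioned in the abstract concrete, below is a minimal sketch of the classical version of that idea for mean estimation of a heavy-tailed vector: split the samples into buckets, average each bucket, and combine the bucket means robustly. The function name `median_of_means`, the bucket count `k`, and the coordinatewise-median combining step are illustrative assumptions; the paper's actual estimators replace the combining step with degree-8 sum-of-squares semidefinite programs, which this sketch does not implement.

```python
import numpy as np

def median_of_means(samples, k):
    """Classical median-of-means estimator for the mean of a heavy-tailed
    random vector: split the n samples into k buckets, average each bucket,
    and combine the k bucket means by a coordinatewise median.

    Illustrative baseline only: the paper's estimators replace this
    combining step with degree-8 sum-of-squares semidefinite programs.
    """
    buckets = np.array_split(np.asarray(samples), k)
    bucket_means = np.stack([b.mean(axis=0) for b in buckets])
    # Coordinatewise median of the k bucket means. More sophisticated
    # combining steps (geometric median, SoS-based aggregation) go here.
    return np.median(bucket_means, axis=0)

# Example: heavy-tailed (Student-t, 3 degrees of freedom) samples in R^d.
rng = np.random.default_rng(0)
d, n = 10, 5000
X = rng.standard_t(df=3, size=(n, d))
mu_hat = median_of_means(X, k=50)
print(np.linalg.norm(mu_hat))  # should be small: the true mean is 0
```

In one dimension, taking $k = O(\log(1/\delta))$ buckets yields deviation $O(\sqrt{\sigma^2 \log(1/\delta)/n})$ with probability $1 - \delta$ assuming only finite variance; achieving analogous exponentially good confidence for covariance estimation and regression in high dimensions, in polynomial time, is what the paper's SoS-based estimators address.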
