
Entangled Mean Estimation in High-Dimensions

Abstract

We study the task of high-dimensional entangled mean estimation in the subset-of-signals model. Specifically, given $N$ independent random points $x_1, \ldots, x_N$ in $\mathbb{R}^D$ and a parameter $\alpha \in (0, 1)$ such that each $x_i$ is drawn from a Gaussian with mean $\mu$ and unknown covariance, and an unknown $\alpha$-fraction of the points have identity-bounded covariances, the goal is to estimate the common mean $\mu$. The one-dimensional version of this task has received significant attention in theoretical computer science and statistics over the past decades. Recent work [LY20; CV24] has given near-optimal upper and lower bounds for the one-dimensional setting. On the other hand, our understanding of even the information-theoretic aspects of the multivariate setting has remained limited.

In this work, we design a computationally efficient algorithm achieving an information-theoretically near-optimal error. Specifically, we show that the optimal error (up to polylogarithmic factors) is $f(\alpha, N) + \sqrt{D/(\alpha N)}$, where the term $f(\alpha, N)$ is the error of the one-dimensional problem and the second term is the sub-Gaussian error rate. Our algorithmic approach employs an iterative refinement strategy, whereby we progressively learn more accurate approximations $\hat\mu$ to $\mu$. This is achieved via a novel rejection sampling procedure that removes points significantly deviating from $\hat\mu$, in an attempt to filter out unusually noisy samples. A complication that arises is that rejection sampling introduces bias in the distribution of the remaining points. To address this issue, we perform a careful analysis of the bias, develop an iterative dimension-reduction strategy, and employ a novel subroutine inspired by list-decodable learning that leverages the one-dimensional result.
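To make the setting concrete, the following is a minimal Python sketch of the subset-of-signals model and of the general "refine, then reject" idea described above. It is an illustration under simplifying assumptions, not the paper's algorithm: the function names (`generate_subset_of_signals`, `iterative_refinement`), the specific noise levels, and the simple quantile-based rejection rule are all hypothetical stand-ins for the paper's rejection sampling, bias analysis, and dimension-reduction machinery.

```python
import numpy as np

def generate_subset_of_signals(N, D, alpha, mu, rng):
    """Draw N points x_i ~ N(mu, sigma_i^2 I): an alpha-fraction of the points
    have identity-bounded covariances (sigma_i <= 1); the rest are much noisier.
    The noise-level ranges below are arbitrary choices for illustration."""
    n_signal = int(alpha * N)
    sigmas = np.concatenate([
        rng.uniform(0.1, 1.0, n_signal),        # "signal" points: sigma_i <= 1
        rng.uniform(5.0, 50.0, N - n_signal),   # heavily noisy points
    ])
    rng.shuffle(sigmas)
    x = mu + sigmas[:, None] * rng.standard_normal((N, D))
    return x, sigmas

def iterative_refinement(x, alpha, n_rounds=5):
    """Toy version of the iterative refinement idea: alternate between
    (i) rejecting points far from the current estimate mu_hat and
    (ii) re-estimating the mean from the retained points.
    Note that this rejection step biases the retained points; the paper's
    contribution is precisely in controlling that bias, which this sketch ignores."""
    mu_hat = np.median(x, axis=0)               # crude but robust initializer
    for _ in range(n_rounds):
        dists = np.linalg.norm(x - mu_hat, axis=1)
        thresh = np.quantile(dists, alpha)      # keep roughly the alpha closest points
        kept = x[dists <= thresh]
        mu_hat = kept.mean(axis=0)
    return mu_hat

rng = np.random.default_rng(0)
D, N, alpha = 50, 2000, 0.2
mu = rng.standard_normal(D)
x, _ = generate_subset_of_signals(N, D, alpha, mu, rng)
print("estimation error:", np.linalg.norm(iterative_refinement(x, alpha) - mu))
```

In this toy setting the quantile-based filter already improves markedly over the naive sample mean, which is dominated by the high-variance points; the paper's analysis addresses the bias and dimension dependence that such a filter introduces.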
