Entangled Mean Estimation in High-Dimensions

We study the task of high-dimensional entangled mean estimation in the subset-of-signals model. Specifically, given $N$ independent random points $x_1, \ldots, x_N$ in $\mathbb{R}^D$ and a parameter $\alpha \in (0, 1)$ such that each $x_i$ is drawn from a Gaussian with mean $\mu$ and unknown covariance, and an unknown $\alpha$-fraction of the points have identity-bounded covariances, the goal is to estimate the common mean $\mu$. The one-dimensional version of this task has received significant attention in theoretical computer science and statistics over the past decades. Recent work [LY20; CV24] has given near-optimal upper and lower bounds for the one-dimensional setting. On the other hand, our understanding of even the information-theoretic aspects of the multivariate setting has remained limited. In this work, we design a computationally efficient algorithm achieving an information-theoretically near-optimal error. Specifically, we show that the optimal error (up to polylogarithmic factors) is $f(\alpha, N) + \sqrt{D/(\alpha N)}$, where the term $f(\alpha, N)$ is the error of the one-dimensional problem and the second term is the sub-Gaussian error rate. Our algorithmic approach employs an iterative refinement strategy, whereby we progressively learn more accurate approximations $\widehat{\mu}$ to $\mu$. This is achieved via a novel rejection sampling procedure that removes points significantly deviating from $\widehat{\mu}$, in an attempt to filter out unusually noisy samples. A complication that arises is that rejection sampling introduces bias in the distribution of the remaining points. To address this issue, we perform a careful analysis of the bias, develop an iterative dimension-reduction strategy, and employ a novel subroutine inspired by list-decodable learning that leverages the one-dimensional result.
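To make the refine-and-filter idea concrete, here is a minimal Python sketch of an iterative loop that alternates between rejection-sampling points far from the current estimate and re-averaging the survivors. This is only an illustration of the high-level strategy, not the paper's algorithm: the function names, the median initializer, and the shrinking rejection-radius schedule are all illustrative assumptions, and the sketch omits the bias correction, dimension reduction, and list-decoding-style subroutine that constitute the actual technical contribution.

```python
import numpy as np

def refine_mean_estimate(points, mu_hat, radius):
    """One refinement round: reject points far from the current estimate
    mu_hat, then re-average the survivors. Schematic only -- the paper's
    procedure additionally corrects the bias this filtering introduces."""
    dists = np.linalg.norm(points - mu_hat, axis=1)
    kept = points[dists <= radius]       # filter unusually noisy samples
    if len(kept) == 0:                   # nothing survives: keep old estimate
        return mu_hat
    return kept.mean(axis=0)

def iterative_refinement(points, num_rounds=10, shrink=0.8):
    """Progressively tighten the rejection radius around the running
    estimate. The initializer and shrink schedule are illustrative
    choices, not the paper's."""
    mu_hat = np.median(points, axis=0)   # crude robust initializer
    radius = np.linalg.norm(points - mu_hat, axis=1).max()
    for _ in range(num_rounds):
        mu_hat = refine_mean_estimate(points, mu_hat, radius)
        radius *= shrink
    return mu_hat

# Toy instance of the subset-of-signals model: an alpha-fraction of
# points have identity covariance, the rest are far noisier, and all
# share the common mean mu.
rng = np.random.default_rng(0)
D, N, alpha = 20, 5000, 0.2
mu = rng.normal(size=D)
n_good = int(alpha * N)
good = rng.normal(size=(n_good, D)) + mu               # covariance <= I
noisy = 10.0 * rng.normal(size=(N - n_good, D)) + mu   # high-variance points
points = np.vstack([good, noisy])
print(np.linalg.norm(iterative_refinement(points) - mu))
```

In the toy run, the shrinking radius eventually excludes nearly all of the high-variance points, so the final average is dominated by the $\alpha N$ identity-bounded samples; the bias that such filtering induces on the survivors is exactly the complication the paper's analysis addresses.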