
Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation

Abstract

We study the problem of list-decodable mean estimation, where an adversary can corrupt a majority of the dataset. Specifically, we are given a set $T$ of $n$ points in $\mathbb{R}^d$ and a parameter $0 < \alpha < \frac{1}{2}$ such that an $\alpha$-fraction of the points in $T$ are i.i.d. samples from a well-behaved distribution $\mathcal{D}$ and the remaining $(1-\alpha)$-fraction are arbitrary. The goal is to output a small list of vectors, at least one of which is close to the mean of $\mathcal{D}$. We develop new algorithms for list-decodable mean estimation, achieving nearly-optimal statistical guarantees, with running time $O(n^{1+\epsilon_0} d)$, for any fixed $\epsilon_0 > 0$. All prior algorithms for this problem had additional polynomial factors in $\frac{1}{\alpha}$. We leverage this result, together with additional techniques, to obtain the first almost-linear time algorithms for clustering mixtures of $k$ separated well-behaved distributions, nearly-matching the statistical guarantees of spectral methods. Prior clustering algorithms inherently relied on an application of $k$-PCA, thereby incurring runtimes of $\Omega(ndk)$. This marks the first runtime improvement for this basic statistical problem in nearly two decades. The starting point of our approach is a novel and simpler near-linear time robust mean estimation algorithm in the $\alpha \to 1$ regime, based on a one-shot matrix multiplicative weights-inspired potential decrease. We crucially leverage this new algorithmic framework in the context of the iterative multi-filtering technique of Diakonikolas et al. '18, '20, providing a method to simultaneously cluster and downsample points using one-dimensional projections -- thus bypassing the $k$-PCA subroutines required by prior algorithms.
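As a concrete illustration of the setting, the following minimal Python sketch generates a dataset in which only an $\alpha$-fraction of points are genuine samples, then runs a naive one-dimensional-projection baseline that outputs a small list of candidate means. This is a toy sketch under assumed parameters: the function `naive_list_decode`, the data model, and the direction count are all illustrative, and it is not the paper's algorithm, which instead combines a matrix multiplicative weights-style potential decrease with iterative multi-filtering.

```python
import numpy as np

# Toy illustration of list-decodable mean estimation (NOT the paper's
# algorithm). An alpha-fraction of points are i.i.d. Gaussian samples
# around an unknown mean; the remaining (1 - alpha)-fraction are arbitrary
# (here, drawn uniformly from a large box).

rng = np.random.default_rng(0)

n, d, alpha = 10_000, 50, 0.1          # sample size, dimension, inlier fraction
mu_true = np.full(d, 5.0)              # unknown inlier mean

n_in = int(alpha * n)
inliers = rng.normal(mu_true, 1.0, size=(n_in, d))
outliers = rng.uniform(-100.0, 100.0, size=(n - n_in, d))
T = np.vstack([inliers, outliers])

def naive_list_decode(T, alpha, num_directions=20, rng=rng):
    """Return a small list of candidate means; under favorable conditions,
    at least one should be close to the inlier mean.

    For each random unit direction g, project the points onto g; the
    inliers concentrate in a short interval under the projection, so the
    densest window containing ~alpha*n projected points tends to be
    inlier-heavy. The mean of each such window is recorded as a candidate."""
    n, d = T.shape
    window = int(alpha * n)
    candidates = []
    for _ in range(num_directions):
        g = rng.normal(size=d)
        g /= np.linalg.norm(g)
        order = np.argsort(T @ g)
        proj_sorted = (T @ g)[order]
        # Width of every window of `window` consecutive projected points;
        # pick the narrowest (densest) one.
        widths = proj_sorted[window - 1:] - proj_sorted[: n - window + 1]
        start = int(np.argmin(widths))
        candidates.append(T[order[start:start + window]].mean(axis=0))
    return candidates

hypotheses = naive_list_decode(T, alpha)
errs = [np.linalg.norm(h - mu_true) for h in hypotheses]
print(f"list size = {len(hypotheses)}, best candidate error = {min(errs):.2f}")
print(f"plain empirical mean error = {np.linalg.norm(T.mean(0) - mu_true):.2f}")
```

The contrast printed at the end is the point of the exercise: with a majority of the data corrupted, the plain empirical mean is useless, while even this naive projection-based list already contains a far better hypothesis. The paper's contribution can be read as making this kind of one-dimensional filtering simultaneously cluster and downsample the points with nearly-optimal guarantees in almost-linear time, avoiding the $k$-PCA step that dominates prior runtimes.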
