Computationally efficient sparse clustering

21 May 2020
Matthias Löffler
Alexander S. Wein
Afonso S. Bandeira
arXiv:2005.10817
Abstract

We study statistical and computational limits of clustering when the cluster centres are sparse and their dimension is possibly much larger than the sample size. Our theoretical analysis focuses on the model $X_i = z_i \theta + \varepsilon_i$, $z_i \in \{-1,1\}$, $\varepsilon_i \sim \mathcal{N}(0, I)$, which has two clusters with centres $\theta$ and $-\theta$. We provide a finite-sample analysis of a new sparse clustering algorithm based on sparse PCA and show that it achieves the minimax optimal misclustering rate in the regime $\|\theta\| \rightarrow \infty$. Our results require the sparsity to grow slower than the square root of the sample size. Using a recent framework for computational lower bounds, the low-degree likelihood ratio, we give evidence that this condition is necessary for any polynomial-time clustering algorithm to succeed below the BBP threshold. This complements existing evidence based on reductions and statistical query lower bounds. Compared to these existing results, we cover a wider set of parameter regimes and give a more precise understanding of the runtime required and the misclustering error achievable. Our results imply that a large class of tests based on low-degree polynomials fails to solve even the weak testing task.
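
The two-cluster model above is easy to simulate, and a simple sparse-PCA-style procedure illustrates the algorithmic idea. The sketch below is a minimal illustration under stated assumptions, not the paper's exact algorithm: it generates data from the model, selects the $k$ coordinates with the largest empirical variances (diagonal thresholding, a standard sparse PCA heuristic), takes the top eigenvector of the sample covariance restricted to that support, and assigns labels by the sign of the projection onto the resulting direction. The sparsity level `k`, the signal strength, and the diagonal-thresholding step are all illustrative choices.

```python
import numpy as np

def generate_data(n, p, theta, seed=0):
    """Sample X_i = z_i * theta + eps_i with z_i uniform on {-1, +1}
    and eps_i ~ N(0, I_p), following the model in the abstract."""
    rng = np.random.default_rng(seed)
    z = rng.choice([-1, 1], size=n)
    X = z[:, None] * theta[None, :] + rng.standard_normal((n, p))
    return X, z

def sparse_pca_cluster(X, k):
    """Illustrative sparse-PCA-style clustering (a sketch, not the
    authors' procedure): keep the k coordinates with the largest
    diagonal entries of the sample covariance, which should carry the
    signal when theta is k-sparse, then project onto the top
    eigenvector of the covariance restricted to that support."""
    n, p = X.shape
    S = X.T @ X / n
    support = np.argsort(np.diag(S))[-k:]   # diagonal thresholding
    w, V = np.linalg.eigh(S[np.ix_(support, support)])
    v = np.zeros(p)
    v[support] = V[:, -1]                    # top eigenvector, embedded in R^p
    return np.sign(X @ v).astype(int)

# Toy run in the sparse, high-dimensional regime p >> n.
n, p, k = 200, 2000, 10
theta = np.zeros(p)
theta[:k] = 2.0                              # k-sparse centre
X, z = generate_data(n, p, theta)
z_hat = sparse_pca_cluster(X, k)
# Labels are identifiable only up to a global sign flip.
err = min(np.mean(z_hat != z), np.mean(z_hat != -z))
print(f"misclustering rate: {err:.3f}")
```

The misclustering rate is reported up to a global sign flip because the model is invariant under swapping $\theta$ and $-\theta$; any clustering method can recover the labels only up to that symmetry.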
