Near-Optimal Explainable $k$-Means for All Dimensions

Many clustering algorithms are guided by certain cost functions, such as the widely-used $k$-means cost. These algorithms divide data points into clusters with often complicated boundaries, creating difficulties in explaining the clustering decision. In a recent work, Dasgupta, Frost, Moshkovitz, and Rashtchian (ICML 2020) introduced explainable clustering, where the cluster boundaries are axis-parallel hyperplanes and the clustering is obtained by applying a decision tree to the data. The central question here is: how much does the explainability constraint increase the value of the cost function? Given $d$-dimensional data points, we show an efficient algorithm that finds an explainable clustering whose $k$-means cost is at most $k^{1-2/d}\,\mathrm{poly}(d\log k)$ times the minimum cost achievable by a clustering without the explainability constraint, assuming $k,d \ge 2$. Taking the minimum of this bound and the $k\,\mathrm{poly}(\log k)$ bound in independent work by Makarychev-Shan (ICML 2021), Gamlath-Jia-Polak-Svensson (2021), or Esfandiari-Mirrokni-Narayanan (2021), we get an improved bound of $k^{1-2/d}\,\mathrm{poly}(\log k)$, which we show is optimal for every choice of $k$ and $d \ge 2$ up to a poly-logarithmic factor in $k$. For $d = 2$ in particular, we show an $O(\log k \log\log k)$ bound, improving near-exponentially over the previous best bound of $O(k \log k)$ by Laber and Murtinho (ICML 2021).
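To make the notion of a threshold-tree clustering concrete, here is a minimal illustrative sketch (not the paper's algorithm, and with a naive cut rule chosen only for simplicity): given $k$ reference centers, recursively apply axis-parallel cuts until each leaf isolates one center, then route any data point down the tree to obtain its cluster label.

```python
import numpy as np

def build_threshold_tree(centers, indices=None):
    """Recursively split a set of centers with axis-parallel cuts until
    each leaf holds exactly one center. The cut rule here (midpoint of
    the widest-spread axis) is purely illustrative; the near-optimal
    cut selection is the subject of the paper."""
    if indices is None:
        indices = np.arange(len(centers))
    if len(indices) == 1:
        return {"leaf": int(indices[0])}
    pts = centers[indices]
    spreads = pts.max(axis=0) - pts.min(axis=0)
    axis = int(np.argmax(spreads))          # widest-spread coordinate
    lo, hi = pts[:, axis].min(), pts[:, axis].max()
    theta = (lo + hi) / 2                   # both sides stay nonempty
    mask = pts[:, axis] <= theta
    return {"axis": axis, "theta": theta,
            "left": build_threshold_tree(centers, indices[mask]),
            "right": build_threshold_tree(centers, indices[~mask])}

def assign(tree, x):
    """Route a point down the tree; the leaf index is its cluster."""
    while "leaf" not in tree:
        tree = tree["left"] if x[tree["axis"]] <= tree["theta"] else tree["right"]
    return tree["leaf"]
```

Each internal node tests a single coordinate against a threshold, so every cluster boundary is a union of axis-parallel hyperplane pieces, and the assignment of any point is explained by at most tree-depth many simple comparisons.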