Near-Optimal Explainable $k$-Means for All Dimensions

Abstract

Many clustering algorithms are guided by certain cost functions such as the widely-used $k$-means cost. These algorithms divide data points into clusters with often complicated boundaries, creating difficulties in explaining the clustering decision. In a recent work, Dasgupta, Frost, Moshkovitz, and Rashtchian (ICML 2020) introduced explainable clustering, where the cluster boundaries are axis-parallel hyperplanes and the clustering is obtained by applying a decision tree to the data. The central question here is: how much does the explainability constraint increase the value of the cost function? Given $d$-dimensional data points, we show an efficient algorithm that finds an explainable clustering whose $k$-means cost is at most $k^{1-2/d}\,\mathrm{poly}(d\log k)$ times the minimum cost achievable by a clustering without the explainability constraint, assuming $k,d\ge 2$. Taking the minimum of this bound and the $k\,\mathrm{polylog}(k)$ bound in independent work by Makarychev-Shan (ICML 2021), Gamlath-Jia-Polak-Svensson (2021), or Esfandiari-Mirrokni-Narayanan (2021), we get an improved bound of $k^{1-2/d}\,\mathrm{polylog}(k)$, which we show is optimal for every choice of $k,d\ge 2$ up to a poly-logarithmic factor in $k$. For $d=2$ in particular, we show an $O(\log k\log\log k)$ bound, improving near-exponentially over the previous best bound of $O(k\log k)$ by Laber and Murtinho (ICML 2021).
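
To make the setting concrete, below is a minimal Python sketch, not the paper's algorithm or any of the cited constructions: given reference $k$-means centers, it assigns points via a small threshold tree of axis-parallel cuts and compares the resulting $k$-means cost to the unconstrained cost. The greedy midpoint splitting rule and the helper names (`threshold_tree_labels`, `kmeans_cost`) are illustrative assumptions introduced here.

```python
# Illustrative sketch only: compare the k-means cost of a simple
# threshold-tree (explainable) clustering to the unconstrained baseline.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_cost(X, labels):
    """Sum of squared distances from each point to its cluster mean."""
    return sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
               for c in np.unique(labels))

def threshold_tree_labels(X, centers):
    """Assign each point to one reference center using only axis-parallel
    cuts: recursively cut at the midpoint of the surviving centers along
    their most spread-out coordinate, until one center remains.
    Assumes the reference centers are pairwise distinct."""
    labels = np.empty(len(X), dtype=int)

    def split(point_idx, center_idx):
        if len(center_idx) == 1:
            labels[point_idx] = center_idx[0]
            return
        sub = centers[center_idx]
        f = int((sub.max(axis=0) - sub.min(axis=0)).argmax())
        theta = (sub[:, f].max() + sub[:, f].min()) / 2  # midpoint cut
        go_left = X[point_idx, f] <= theta
        split(point_idx[go_left], center_idx[sub[:, f] <= theta])
        split(point_idx[~go_left], center_idx[sub[:, f] > theta])

    split(np.arange(len(X)), np.arange(len(centers)))
    return labels

# Toy data: five well-separated Gaussian clusters along the diagonal.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)) + rng.integers(0, 5, size=(2000, 1)) * 3.0
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
explainable = threshold_tree_labels(X, km.cluster_centers_)
print("unconstrained cost:", km.inertia_)
print("explainable cost:  ", kmeans_cost(X, explainable))
```

On such well-separated toy data the two costs come out close; the paper's $k^{1-2/d}\,\mathrm{polylog}(k)$ bound governs how far apart they can be in the worst case.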
