Scalable Kernel K-Means Clustering with Nyström Approximation: Relative-Error Bounds

Abstract

Kernel $k$-means clustering can correctly identify and extract a far more varied collection of cluster structures than the linear $k$-means clustering algorithm. However, kernel $k$-means clustering is computationally expensive when the non-linear feature map is high-dimensional and there are many input points. Kernel approximation, e.g., the Nyström method, has been applied in previous works to approximately solve kernel learning problems when both of the above conditions are present. This work analyzes the application of this paradigm to kernel $k$-means clustering, and shows that applying the linear $k$-means clustering algorithm to $\frac{k}{\epsilon}(1 + o(1))$ features constructed using a so-called rank-restricted Nyström approximation results in cluster assignments that satisfy a $1 + \epsilon$ approximation ratio in terms of the kernel $k$-means cost function, relative to the guarantee provided by the same algorithm without the use of the Nyström method. As part of the analysis, this work establishes a novel $1 + \epsilon$ relative-error trace norm guarantee for low-rank approximation using the rank-restricted Nyström approximation. Empirical evaluations on the 8.1 million instance MNIST8M dataset demonstrate the scalability and usefulness of kernel $k$-means clustering with Nyström approximation. This work argues that spectral clustering using Nyström approximation, a popular and computationally efficient but theoretically unsound approach to non-linear clustering, should be replaced with the efficient and theoretically sound combination of kernel $k$-means clustering with Nyström approximation. The superior performance of the latter approach is empirically verified.
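To make the guarantees in the abstract concrete, the display below writes out the kernel $k$-means objective and the shape of the stated bounds; the notation ($\phi$, $\mu_j$, $\gamma$, $K_k$) is standard and chosen here for exposition, not taken from the paper's body.

```latex
% Kernel k-means cost of a partition \mathcal{C} = \{C_1, \dots, C_k\}
% under a feature map \phi (standard notation, assumed for exposition):
\[
  \operatorname{cost}(\mathcal{C})
    = \sum_{j=1}^{k} \sum_{x \in C_j} \bigl\| \phi(x) - \mu_j \bigr\|_2^2,
  \qquad
  \mu_j = \frac{1}{|C_j|} \sum_{x \in C_j} \phi(x).
\]
% If the linear k-means solver used is a \gamma-approximation algorithm,
% the abstract's claim reads: running it on s = (k/\epsilon)(1 + o(1))
% rank-restricted Nystrom features returns \widetilde{\mathcal{C}} with
\[
  \operatorname{cost}(\widetilde{\mathcal{C}})
    \le (1+\epsilon)\,\gamma \cdot \min_{\mathcal{C}} \operatorname{cost}(\mathcal{C}).
\]
% The supporting low-rank result: the rank-restricted Nystrom
% approximation \widetilde{K} of the kernel matrix K satisfies a
% relative-error trace-norm bound of the form
\[
  \| K - \widetilde{K} \|_{*} \le (1+\epsilon)\, \| K - K_k \|_{*},
\]
% where K_k is the best rank-k approximation to K.
```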

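The pipeline the abstract describes (build rank-restricted Nyström features, then run ordinary linear $k$-means on them) can be sketched in a few lines. The sketch below assumes uniform landmark sampling, an RBF kernel, and scikit-learn's KMeans; the function name and all parameter choices are illustrative, not the authors' reference implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def rank_restricted_nystrom_features(X, m, s, gamma=1.0, seed=0):
    """Sketch of rank-restricted Nystrom feature construction.

    X: (n, d) data matrix; m: number of landmarks; s: target rank
    (roughly k/eps per the paper's analysis). All names and defaults
    here are illustrative assumptions, not the authors' code.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    idx = rng.choice(n, size=m, replace=False)   # uniform landmark sample
    C = rbf_kernel(X, X[idx], gamma=gamma)       # n x m cross-kernel block
    W = C[idx, :]                                # m x m landmark kernel block

    # Square root of pinv(W) via eigendecomposition (W is PSD up to
    # numerical error); drop near-zero eigenvalues for stability.
    evals, evecs = np.linalg.eigh(W)
    keep = evals > 1e-10 * evals.max()
    W_pinv_sqrt = evecs[:, keep] / np.sqrt(evals[keep])

    F = C @ W_pinv_sqrt                          # Nystrom features: F @ F.T ~ K
    # Rank restriction: keep the top-s left singular directions of F,
    # so that Z @ Z.T is the best rank-s approximation of F @ F.T.
    U, sig, _ = np.linalg.svd(F, full_matrices=False)
    Z = U[:, :s] * sig[:s]                       # n x s restricted features
    return Z

# Usage: linear k-means on the approximate features.
# X = ...  # (n, d) data
# Z = rank_restricted_nystrom_features(X, m=1000, s=200, gamma=0.1)
# labels = KMeans(n_clusters=10, n_init=10).fit_predict(Z)
```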