Kernel clustering: density biases and solutions

Abstract

Clustering is widely used in data analysis, where kernel methods are particularly popular due to their generality and discriminative power. However, kernel clustering has a practically significant bias toward small dense clusters, e.g. as empirically observed by Shi and Malik (TPAMI'00). Its causes have never been analyzed and understood theoretically, even though many attempts have been made to improve the results. We provide conditions under which this bias provably occurs in kernel clustering. Previously, Breiman (ML'96) proved a bias toward histogram-mode isolation in the discrete Gini criterion for decision tree learning. We found that kernel clustering reduces to a continuous generalization of the Gini criterion for a common class of kernels, where we prove a bias toward density-mode isolation, which we call Breiman's bias. These theoretical findings suggest that a principled solution should directly address the inhomogeneity of the data density. In particular, we show that density equalization can be achieved implicitly using either locally adaptive weights or a general class of Riemannian (geodesic) kernels. Our density-equalization principle unifies many popular kernel clustering criteria, including normalized cut, which we show has a bias toward sparse subsets that is inversely related to Breiman's bias. Our synthetic and real data experiments illustrate these density biases and the proposed solutions. We anticipate that a theoretical understanding of kernel clustering limitations and their principled solutions will be important for a broad spectrum of data analysis applications across disciplines.
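As a rough, self-contained illustration of the two phenomena the abstract summarizes, the Python sketch below runs exact kernel k-means on a mixture of one dense and one sparse 1-D Gaussian mode. This is not the paper's code: the data, the bandwidth gamma, and the use of Zelnik-Manor and Perona (NIPS'04) local scaling as one concrete form of locally adaptive kernel are assumptions made for this example. With a fixed-bandwidth Gaussian kernel the optimum tends to carve out the dense mode (Breiman's bias); with locally adaptive bandwidths, which implicitly equalize density, the cut tends to move to the low-density gap between the modes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import euclidean_distances, rbf_kernel
from sklearn.neighbors import NearestNeighbors

def kernel_kmeans(K, n_clusters=2, seed=0):
    """Exact kernel k-means: k-means on the kernel's eigen-feature map."""
    vals, vecs = np.linalg.eigh(K)
    # phi @ phi.T == K (negative eigenvalues clipped to keep K PSD).
    phi = vecs * np.sqrt(np.maximum(vals, 0.0))
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(phi)

rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.1, size=(900, 1))   # tight, dense mode
sparse = rng.normal(5.0, 2.0, size=(100, 1))  # wide, sparse mode
X = np.vstack([dense, sparse])

def describe(labels, name):
    for c in np.unique(labels):
        pts = X[labels == c]
        print(f"{name} cluster {c}: n={len(pts)}, span [{pts.min():.2f}, {pts.max():.2f}]")

# Fixed-bandwidth Gaussian kernel: the optimum tends to isolate the
# dense mode (Breiman's bias) instead of splitting at the density gap.
describe(kernel_kmeans(rbf_kernel(X, gamma=2.0)), "fixed ")

# Locally adaptive bandwidths (Zelnik-Manor & Perona local scaling) act
# as an implicit density equalizer, a role analogous to the adaptive
# weights / geodesic kernels proposed in the paper.
nn = NearestNeighbors(n_neighbors=8).fit(X)
sigma = nn.kneighbors(X)[0][:, -1]            # distance to 7th neighbor
D2 = euclidean_distances(X, squared=True)
K_adapt = np.exp(-D2 / (np.outer(sigma, sigma) + 1e-12))
describe(kernel_kmeans(K_adapt), "scaled")
```

The printed cluster sizes and spans show which side of the density gap each cut falls on; the exact boundary depends on the assumed bandwidths and random seed, so this only sketches the qualitative effect the paper analyzes.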
