Coresets are efficient representations of datasets such that models trained on a coreset are provably competitive with models trained on the original dataset. As such, they have been successfully used to scale up clustering models such as K-Means and Gaussian mixture models to massive datasets. However, until now, the algorithms and corresponding theory were usually specific to each clustering problem. We propose a single, practical algorithm to construct strong coresets for a large class of hard and soft clustering problems based on Bregman divergences. This class includes hard clustering with popular distortion measures such as the squared Euclidean distance, the Mahalanobis distance, the KL divergence (relative entropy), and the Itakura-Saito distance. The corresponding soft clustering problems are directly related to popular mixture models due to a dual relationship between Bregman divergences and exponential family distributions. Our results recover existing coreset constructions for K-Means and Gaussian mixture models and imply polynomial-time approximation schemes for various hard clustering problems.
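The abstract does not spell out the construction, but sensitivity-based importance sampling is the standard route to strong coresets of this kind: sample points with probability roughly proportional to their contribution to the clustering cost, then reweight so the coreset cost is an unbiased estimate of the full-data cost. The sketch below is a hypothetical illustration of that idea, not the paper's algorithm; the function names (`bregman_coreset`, `squared_euclidean`), the uniform-random "rough solution", and the 50/50 mix of cost-proportional and uniform probabilities are all illustrative assumptions.

```python
import numpy as np

def squared_euclidean(X, c):
    """Bregman divergence induced by phi(x) = ||x||^2 (squared Euclidean)."""
    return np.sum((X - c) ** 2, axis=1)

def bregman_coreset(X, k, m, divergence=squared_euclidean, seed=None):
    """Hypothetical importance-sampling coreset sketch.

    Draws m points with probability roughly proportional to their
    clustering cost against a rough solution, then reweights so that
    weighted coreset costs are unbiased estimates of full-data costs.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Rough solution: k centers sampled uniformly at random. (A proper
    # analysis would use a bicriteria approximation; this is a stand-in.)
    centers = X[rng.choice(n, size=k, replace=False)]
    # Cost of each point to its nearest center under the chosen divergence.
    cost = np.min(np.stack([divergence(X, c) for c in centers]), axis=0)
    # Mix cost-proportional and uniform probabilities, mimicking the
    # sensitivity upper bounds used in coreset analyses. Sums to 1.
    p = 0.5 * cost / cost.sum() + 0.5 / n
    idx = rng.choice(n, size=m, p=p)
    weights = 1.0 / (m * p[idx])  # inverse-probability reweighting
    return X[idx], weights

# Usage: a 10x-smaller weighted coreset of 10,000 Gaussian points.
X = np.random.default_rng(0).normal(size=(10_000, 5))
C, w = bregman_coreset(X, k=3, m=1_000, seed=1)
```

Because the divergence enters only through the cost computation, any Bregman divergence can be swapped in for `divergence`, which is what makes a single construction plausible across the whole class; the returned weights would then feed into a weighted K-Means or EM objective evaluated on the coreset alone.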