
An Out-of-sample Extension of Sparse Subspace Clustering and Low Rank Representation for Clustering Large Scale Data Sets

Abstract

In this paper, we propose a general framework to address two problems in Sparse Subspace Clustering (SSC) and Low Rank Representation (LRR): the scalability issue and the out-of-sample problem. SSC and LRR are two recently proposed subspace clustering algorithms that construct a similarity graph for spectral clustering using the sparsest and the lowest-rank coefficients, respectively. Both have achieved state-of-the-art results in data clustering. However, their time complexities are so high that applying them to large-scale data sets is inefficient. Moreover, SSC and LRR cannot cope with out-of-sample data that were not used to construct the similarity graph: for each new datum, they must recompute the sparsest/lowest-rank coefficients and the membership assignment of the whole data set, which makes them uncompetitive with fast online learning algorithms. To overcome these problems, we propose a simple but effective method that makes SSC and LRR feasible for grouping both new data and large-scale data. Our method adopts a "sampling, clustering, coding, and classifying" strategy. Specifically, we split the data into two parts, in-sample data and out-of-sample data, where the out-of-sample data lie in the subspaces spanned by the in-sample data; we obtain the cluster membership of the in-sample data; after that, we assign each out-of-sample datum to the nearest subspace, i.e., the one that yields the minimal reconstruction error. Both theoretical analysis and experimental results show the efficacy of our method.
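To make the out-of-sample assignment step concrete, the following is a minimal sketch (not the authors' code) of the "classifying" stage: each new sample is reconstructed by the in-sample data of every cluster and assigned to the cluster with the smallest reconstruction residual. For simplicity, plain least squares is used here in place of the sparse/low-rank coding the paper builds on; the function and variable names are illustrative assumptions.

```python
import numpy as np

def assign_out_of_sample(X_in, labels_in, X_out):
    """Illustrative sketch of minimal-residual assignment.

    X_in:      d x n matrix of in-sample data (columns are samples)
    labels_in: length-n array of cluster labels for the in-sample columns
    X_out:     d x m matrix of out-of-sample data
    Returns a length-m array of cluster labels for the out-of-sample columns.
    """
    clusters = np.unique(labels_in)
    labels_out = np.empty(X_out.shape[1], dtype=clusters.dtype)
    for j in range(X_out.shape[1]):
        y = X_out[:, j]
        residuals = []
        for c in clusters:
            D = X_in[:, labels_in == c]                   # in-sample data of cluster c
            coeff, *_ = np.linalg.lstsq(D, y, rcond=None)  # code y over cluster c
            residuals.append(np.linalg.norm(y - D @ coeff))
        labels_out[j] = clusters[int(np.argmin(residuals))]
    return labels_out
```

In this sketch, the in-sample labels would come from running SSC or LRR followed by spectral clustering on the sampled subset; the out-of-sample step then requires only one small regression per cluster per new datum, which is what makes the overall scheme scalable to large data sets and new samples.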
