Principal Coefficients Embedding: Theory and Algorithm

Abstract

Low Rank Representation (LRR) and its extensions seek the lowest-rank representation of a given data set by solving a rank-minimization problem, an approach that has attracted considerable interest. Since the rank operator is nonconvex and discontinuous, LRR uses the Nuclear norm as a convex relaxation, and most theoretical studies argue that the Nuclear norm may be the only valid surrogate for the rank operator. In this paper, we prove the equivalence between the Frobenius-norm- and the Nuclear-norm-based representations. Specifically, when the data set is error-free, the Frobenius-norm-based representation is exactly the Nuclear-norm-based one; when the data set contains a small amount of additive errors, the Frobenius norm is equivalent to the truncated Nuclear norm. Our theoretical result not only provides a new surrogate (i.e., the Frobenius norm) for the rank-minimization problem, but also offers novel theoretical insight into the success of Frobenius-norm-based methods in subspace clustering and pattern classification. Based on these theoretical results, we propose a robust subspace learning algorithm, Principal Coefficients Embedding (PCE), which builds a similarity graph from the k largest Frobenius-norm-based coefficients of each sample and embeds the graph into a low-dimensional space. Extensive experimental results show that PCE is superior to six feature extraction methods on four popular facial databases with respect to accuracy and robustness to corruption and disguise.
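To make the pipeline concrete, below is a minimal Python sketch of the three steps the abstract describes: a Frobenius-norm-based self-representation, a similarity graph built from the k largest coefficients per sample, and a spectral embedding of that graph. The function name `pce_embed`, the ridge regularizer `lam`, and all parameter defaults are illustrative assumptions, not the paper's reported algorithm; in the error-free case the Frobenius-norm solution of X = XC has the closed form C = VVᵀ from the SVD X = UΣVᵀ, and the ridge form used here is a common stand-in when the data contain small additive errors.

```python
# Minimal sketch of the PCE pipeline as described in the abstract.
# `pce_embed`, `lam`, and the defaults below are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import laplacian

def pce_embed(X, k=10, d=2, lam=1e-3):
    """Embed the columns of X (features x samples) into d dimensions.

    1. Solve the Frobenius-norm-based self-representation
       min_C ||X - XC||_F^2 + lam * ||C||_F^2  (ridge stand-in for the
       error-free closed form C = V V^T).
    2. Keep the k largest-magnitude coefficients per sample to build a
       similarity graph.
    3. Embed the graph via the bottom eigenvectors of its normalized
       Laplacian (spectral embedding).
    """
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam * np.eye(n), G)  # ridge self-representation
    np.fill_diagonal(C, 0.0)                     # no self-loops
    W = np.abs(C)
    # zero out all but the k largest coefficients in each column
    smallest = np.argsort(W, axis=0)[:-k, :]
    np.put_along_axis(W, smallest, 0.0, axis=0)
    W = 0.5 * (W + W.T)                          # symmetrize the graph
    L = laplacian(W, normed=True)
    _, vecs = np.linalg.eigh(L)
    return vecs[:, 1:d + 1]                      # skip the trivial eigenvector
```

For example, with a features-by-samples matrix X, `Y = pce_embed(X, k=10, d=2)` returns one 2-D point per sample; rows of Y belonging to the same underlying subspace should cluster together.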
