
Automatic Subspace Learning via Principal Coefficients Embedding

arXiv:1411.4419, 17 November 2014
Xi Peng
Jiwen Lu
Zhang Yi
Abstract

In this paper, we address two problems in unsupervised subspace learning: 1) how to automatically identify the feature dimension of the learned subspace, and 2) how to learn the underlying subspace in the presence of gross corruptions such as Gaussian noise. We show that these two problems are two sides of the same coin, i.e., both can be solved by removing possible errors from the training data $\mathbf{D} \in \mathds{R}^{m \times n}$. To achieve this, we propose a new method, called Principal Coefficients Embedding (PCE), that simultaneously learns a clean data set $\mathbf{D}_0 \in \mathds{R}^{m \times n}$ and a linear representation (denoted by $\mathbf{C}$) from $\mathbf{D}$. By embedding $\mathbf{C}$ into a $k$-dimensional space, PCE obtains a projection matrix that preserves some desirable properties of the inputs, where $k \ll m$ is exactly the rank of $\mathbf{C}$. PCE has three advantages: 1) it can automatically determine the feature dimension even when the data are sampled from a union of multiple linear subspaces; 2) it is robust to various noises and real disguises; 3) it has a closed-form solution and can be computed very quickly. Extensive experimental results show the superiority of PCE on a range of databases in terms of classification accuracy, robustness, and efficiency.
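
For intuition, the following is a minimal NumPy sketch of the pipeline the abstract describes, not the authors' actual PCE algorithm: the joint recovery of $\mathbf{D}_0$ and $\mathbf{C}$ is replaced by a plain truncated SVD, and the representation is the classical shape-interaction matrix $\mathbf{C} = \mathbf{V}_r \mathbf{V}_r^{\top}$, a known closed-form self-representation of rank-$r$ data. The function name pce_sketch and the singular-value threshold tol are illustrative assumptions, not the paper's criteria.

import numpy as np

def pce_sketch(D, tol=None):
    # Illustrative stand-in for Principal Coefficients Embedding (PCE),
    # NOT the authors' algorithm: the joint recovery of (D0, C) is
    # replaced by a plain truncated SVD, and C is the shape-interaction
    # matrix V_r V_r^T, whose rank r fixes the feature dimension k
    # automatically, as the abstract describes.
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    if tol is None:
        # Heuristic noise floor for the rank estimate (an assumption).
        tol = s.max() * max(D.shape) * np.finfo(s.dtype).eps
    r = int(np.sum(s > tol))
    D0 = (U[:, :r] * s[:r]) @ Vt[:r]   # rank-r "clean" estimate of D
    C = Vt[:r].T @ Vt[:r]              # satisfies D0 = D0 @ C
    Y = Vt[:r].T                       # n x k embedding with k = rank(C) = r
    return D0, C, Y

# Toy usage: rank-5 data corrupted by small Gaussian noise.
rng = np.random.default_rng(0)
D = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 200))
D_noisy = D + 0.01 * rng.standard_normal(D.shape)
D0, C, Y = pce_sketch(D_noisy, tol=1.0)
print(C.shape, Y.shape)  # (200, 200) (200, 5) when the rank estimate is 5

The sketch illustrates the automatic-dimension property: once the rank $r$ of $\mathbf{C}$ is fixed by thresholding the singular values, the embedding dimension $k = r$ follows with no manual tuning, mirroring the first advantage claimed in the abstract.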
