Learning Feature Sparse Principal Components

23 April 2019
Lai Tian
Feiping Nie
Xuelong Li
arXiv:1904.10155 (abs · PDF · HTML)
Abstract

Sparse PCA has proven effective in high-dimensional data analysis, yet a gap remains between computational methods and statistical theory. This paper presents algorithms for row-sparsity-constrained PCA, named Feature Sparse PCA (FSPCA), which performs feature selection and PCA simultaneously. Existing techniques for the FSPCA problem suffer from two main drawbacks: (1) most approaches solve only for the leading eigenvector and rely on deflation to estimate the leading m-dimensional eigenspace, which raises feature-sparsity inconsistency, identifiability, and orthogonality issues; (2) some approaches are heuristics without convergence guarantees. In this paper, we present convergence-guaranteed algorithms that estimate the leading m-dimensional eigenspace directly. Specifically, we show that for a low-rank covariance matrix, the FSPCA problem can be solved globally (Algorithm 1). We then propose an algorithm (Algorithm 2) that solves FSPCA for a general covariance matrix by iteratively building a carefully designed low-rank proxy covariance. Theoretical analysis establishes the convergence guarantee. Experimental results show the promising performance of the new algorithms compared with state-of-the-art methods on both synthetic and real-world datasets.
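To make the problem concrete: FSPCA seeks an orthonormal W ∈ R^{d×m} that maximizes tr(WᵀAW) subject to W having at most k nonzero rows, so that all m components share the same k selected features. The sketch below illustrates this alternating flavor in a minimal form (score features, then re-solve an eigenproblem on the selected k×k submatrix of A); it is not the paper's Algorithm 1 or 2, and the scoring rule, function name, and stopping check are assumptions made for the example.

```python
import numpy as np

def fspca_sketch(A, m, k, n_iter=50, seed=0):
    """Illustrative sketch of Feature Sparse PCA (FSPCA).

    Approximately maximizes tr(W.T @ A @ W) subject to W.T @ W = I_m
    and at most k nonzero rows of W (requires m <= k). Alternates
    feature scoring with an eigendecomposition restricted to the
    selected features. A simplified heuristic for illustration only,
    not the paper's Algorithms 1 and 2.
    """
    d = A.shape[0]
    rng = np.random.default_rng(seed)
    # Random orthonormal start: reduced QR of a Gaussian matrix.
    W, _ = np.linalg.qr(rng.standard_normal((d, m)))
    support = None
    for _ in range(n_iter):
        # Score each feature by the row norm of A @ W; keep the top k.
        scores = np.linalg.norm(A @ W, axis=1)
        new_support = np.sort(np.argsort(scores)[-k:])
        # Leading m eigenvectors of the selected k x k submatrix of A.
        sub = A[np.ix_(new_support, new_support)]
        _, vecs = np.linalg.eigh(sub)        # eigenvalues in ascending order
        W = np.zeros((d, m))
        W[new_support] = vecs[:, -m:]        # top-m eigenvectors
        if support is not None and np.array_equal(support, new_support):
            break                            # selected features stabilized
        support = new_support
    return W, support

# Toy usage: a rank-m signal carried by the first k of d features.
d, k, m = 50, 5, 2
rng = np.random.default_rng(1)
U = np.zeros((d, m))
U[:k] = rng.standard_normal((k, m))
A = U @ U.T + 0.01 * np.eye(d)               # symmetric PSD covariance
W, support = fspca_sketch(A, m, k)
print(support)                               # expected: [0 1 2 3 4]
```

In this toy run the five signal-bearing features should be recovered, and W is exactly k-row-sparse with orthonormal columns by construction, since its nonzero rows are eigenvectors of a principal submatrix of A.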
