ResearchTrend.AI

sparseGeoHOPCA: A Geometric Solution to Sparse Higher-Order PCA Without Covariance Estimation

10 June 2025
Renjie Xu
Chong Wu
Maolin Che
Zhuoheng Ran
Yimin Wei
Hong Yan
arXiv (abs) · PDF · HTML
Abstract

We propose sparseGeoHOPCA, a novel framework for sparse higher-order principal component analysis (SHOPCA) that introduces a geometric perspective to high-dimensional tensor decomposition. By unfolding the input tensor along each mode and reformulating the resulting subproblems as structured binary linear optimization problems, our method transforms the original nonconvex sparse objective into a tractable geometric form. This eliminates the need for explicit covariance estimation and iterative deflation, enabling significant gains in both computational efficiency and interpretability, particularly in high-dimensional and unbalanced data scenarios. We theoretically establish the equivalence between the geometric subproblems and the original SHOPCA formulation, and derive worst-case approximation error bounds based on classical PCA residuals, providing data-dependent performance guarantees. The proposed algorithm achieves a total computational complexity of $O\left(\sum_{n=1}^{N} (k_n^3 + J_n k_n^2)\right)$, which scales linearly with tensor size. Extensive experiments demonstrate that sparseGeoHOPCA accurately recovers sparse supports in synthetic settings, preserves classification performance under 10× compression, and achieves high-quality image reconstruction on ImageNet, highlighting its robustness and versatility.
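The mode-wise unfolding the abstract builds on is standard tensor matricization, which turns each subproblem into a matrix problem. A minimal NumPy sketch of mode-n unfolding (illustrative only, not the authors' implementation; the axis-ordering convention shown here is one of several in common use):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding (matricization): bring axis `mode` to the front,
    then flatten the remaining axes into columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# A 2 x 3 x 4 tensor has three unfoldings, one per mode.
X = np.arange(24).reshape(2, 3, 4)
X0 = unfold(X, 0)  # shape (2, 12)
X1 = unfold(X, 1)  # shape (3, 8)
X2 = unfold(X, 2)  # shape (4, 6)
```

Each unfolding X_n has its mode's dimension as rows and the product of the remaining dimensions as columns, which is the matrix on which a per-mode sparse subproblem would then be posed.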

@article{xu2025_2506.08670,
  title={sparseGeoHOPCA: A Geometric Solution to Sparse Higher-Order PCA Without Covariance Estimation},
  author={Renjie Xu and Chong Wu and Maolin Che and Zhuoheng Ran and Yimin Wei and Hong Yan},
  journal={arXiv preprint arXiv:2506.08670},
  year={2025}
}