arXiv:2208.08938
Meta Sparse Principal Component Analysis

18 August 2022
Imon Banerjee
Jean Honorio
Abstract

We study meta-learning for support recovery (i.e., recovery of the set of non-zero entries) in high-dimensional Principal Component Analysis. We reduce the sufficient sample complexity in a novel task using information learned from auxiliary tasks. We assume that each task has a different random Principal Component (PC) matrix with a possibly different support, and that the support union of the PC matrices is small. We then pool the data from all tasks to perform an improper estimation of a single PC matrix by maximising the $\ell_1$-regularised predictive covariance, and establish that, with high probability, the true support union can be recovered given a sufficient number of tasks $m$ and a sufficient number of samples $O\left(\frac{\log(p)}{m}\right)$ per task, for $p$-dimensional vectors. Then, for a novel task, we prove that maximising the $\ell_1$-regularised predictive covariance under the additional constraint that the support is a subset of the estimated support union reduces the sufficient sample complexity of successful support recovery to $O(\log |J|)$, where $J$ is the support union recovered from the auxiliary tasks. Typically, $|J|$ is much smaller than $p$ for sparse matrices. Finally, we demonstrate the validity of our theoretical results through numerical simulations.
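The two-stage procedure in the abstract can be sketched numerically. The sketch below is a hypothetical simplification, not the paper's estimator: in place of the $\ell_1$-regularised predictive-covariance maximisation it uses a hard-thresholded leading eigenvector of the sample covariance as a stand-in for sparse PC estimation, and all dimensions, thresholds, and signal strengths are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of the two-stage idea: (1) pool m auxiliary tasks to
# estimate the support union J, (2) estimate a novel task's sparse PC only
# over the coordinates in the estimated J, shrinking the effective dimension
# from p to |J|. Thresholding a leading eigenvector is used here as a simple
# stand-in for the paper's l1-regularised estimator.

rng = np.random.default_rng(0)
p, n_per_task, m = 60, 30, 8
J_true = np.arange(6)  # assumed small support union shared across tasks

def sample_task():
    """One task: spiked covariance along a sparse PC supported inside J_true."""
    v = np.zeros(p)
    idx = rng.choice(J_true, size=4, replace=False)
    v[idx] = rng.normal(size=4)
    v /= np.linalg.norm(v)
    factors = rng.normal(size=(n_per_task, 1))
    return 4.0 * factors * v + rng.normal(size=(n_per_task, p))

def thresholded_pc_support(X, tau):
    """Support of the hard-thresholded leading eigenvector of X's covariance."""
    cov = np.cov(X, rowvar=False)
    _, V = np.linalg.eigh(cov)       # eigenvalues in ascending order
    v1 = V[:, -1]                    # eigenvector of the largest eigenvalue
    return np.flatnonzero(np.abs(v1) > tau)

# Stage 1: pool the auxiliary tasks and estimate the support union J.
X_pool = np.vstack([sample_task() for _ in range(m)])
J_hat = thresholded_pc_support(X_pool, tau=0.1)

# Stage 2: for a novel task, estimate only over the coordinates in J_hat;
# indices returned for the reduced data are mapped back via J_hat.
X_new = sample_task()
support_new = J_hat[thresholded_pc_support(X_new[:, J_hat], tau=0.1)]
```

By construction the novel task's estimated support is a subset of the estimated support union, which is the constraint the abstract describes for the novel-task estimator.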
