Non-convex Robust PCA

28 October 2014
Praneeth Netrapalli
U. Niranjan
Sujay Sanghavi
Anima Anandkumar
Prateek Jain
arXiv:1410.7660
Abstract

We propose a new method for robust PCA -- the task of recovering a low-rank matrix from sparse corruptions that are of unknown value and support. Our method involves alternating between projecting appropriate residuals onto the set of low-rank matrices, and the set of sparse matrices; each projection is {\em non-convex} but easy to compute. In spite of this non-convexity, we establish exact recovery of the low-rank matrix, under the same conditions that are required by existing methods (which are based on convex optimization). For an $m \times n$ input matrix ($m \leq n$), our method has a running time of $O(r^2 mn)$ per iteration, and needs $O(\log(1/\epsilon))$ iterations to reach an accuracy of $\epsilon$. This is close to the running time of simple PCA via the power method, which requires $O(rmn)$ per iteration and $O(\log(1/\epsilon))$ iterations. In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2 n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy. Experiments on both synthetic and real data establish the improved speed and accuracy of our method over existing convex implementations.
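The alternating-projection idea in the abstract can be conveyed with a short sketch: project the residual $Y - S$ onto rank-$r$ matrices with a truncated SVD, then project the residual $Y - L$ onto sparse matrices by hard-thresholding small entries. The NumPy code below is a minimal illustration under stated assumptions, not the paper's exact AltProj procedure: the real algorithm grows the target rank in stages and derives its threshold constant from incoherence parameters, whereas here `beta`, the iteration count, the function names, and the synthetic demo parameters are choices made for the example.

```python
import numpy as np


def project_rank_r(M, r):
    """Non-convex projection onto matrices of rank at most r (truncated SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]


def hard_threshold(M, zeta):
    """Non-convex projection onto sparse matrices: zero entries with |M_ij| <= zeta."""
    return M * (np.abs(M) > zeta)


def robust_pca_sketch(Y, r, beta=0.1, n_iters=30):
    """Alternating projections for Y ~ L* + S* (low-rank plus sparse).

    Illustrative fixed-rank loop only: the paper's AltProj increases the
    target rank in stages and sets the threshold constant from incoherence
    parameters; `beta` here is a hand-tuned guess for this sketch.
    """
    S = np.zeros_like(Y)
    L = np.zeros_like(Y)
    for t in range(n_iters):
        sigma = np.linalg.svd(Y - S, compute_uv=False)
        # Threshold: (r+1)-th singular value of the residual plus a
        # geometrically decaying term, loosely following the paper's schedule.
        zeta = beta * (sigma[r] + 0.5 ** t * sigma[r - 1])
        L = project_rank_r(Y - S, r)      # low-rank projection of the residual Y - S
        S = hard_threshold(Y - L, zeta)   # sparse projection of the residual Y - L
    return L, S


if __name__ == "__main__":
    # Synthetic demo: rank-2 matrix plus ~5% large sparse corruptions.
    rng = np.random.default_rng(0)
    L_true = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 80))
    mask = rng.random((100, 80)) < 0.05
    S_true = mask * rng.choice([-1.0, 1.0], (100, 80)) * rng.uniform(5.0, 10.0, (100, 80))
    L_hat, S_hat = robust_pca_sketch(L_true + S_true, r=2)
    print("relative error in L:",
          np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```

Each iteration here costs one SVD of the residual; a faster implementation in the spirit of the paper's $O(r^2 mn)$ bound would replace the full SVD with a rank-$r$ truncated SVD computed by power iterations.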
