Faster Eigenvector Computation via Shift-and-Invert Preconditioning

26 May 2016
Dan Garber
Elad Hazan
Chi Jin
Sham M. Kakade
Cameron Musco
Praneeth Netrapalli
Aaron Sidford
Abstract

We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix $\Sigma$, i.e., computing a unit vector $x$ such that $x^T \Sigma x \ge (1-\epsilon)\lambda_1(\Sigma)$:

Offline Eigenvector Estimation: Given an explicit $A \in \mathbb{R}^{n \times d}$ with $\Sigma = A^T A$, we show how to compute an $\epsilon$-approximate top eigenvector in time $\tilde O\big(\big[\mathrm{nnz}(A) + \frac{d \cdot \mathrm{sr}(A)}{\mathrm{gap}^2}\big] \cdot \log 1/\epsilon\big)$ and $\tilde O\big(\big[\frac{\mathrm{nnz}(A)^{3/4} (d \cdot \mathrm{sr}(A))^{1/4}}{\sqrt{\mathrm{gap}}}\big] \cdot \log 1/\epsilon\big)$. Here $\mathrm{nnz}(A)$ is the number of nonzeros in $A$, $\mathrm{sr}(A)$ is the stable rank, and $\mathrm{gap}$ is the relative eigengap. By separating the $\mathrm{gap}$ dependence from the $\mathrm{nnz}(A)$ term, our first runtime improves upon the classical power and Lanczos methods. It also improves prior work using fast subspace embeddings [AC09, CW13] and stochastic optimization [Sha15c], giving significantly better dependencies on $\mathrm{sr}(A)$ and $\epsilon$. Our second running time improves these further when $\mathrm{nnz}(A) \le \frac{d \cdot \mathrm{sr}(A)}{\mathrm{gap}^2}$.

Online Eigenvector Estimation: Given a distribution $D$ with covariance matrix $\Sigma$ and a vector $x_0$ which is an $O(\mathrm{gap})$-approximate top eigenvector for $\Sigma$, we show how to refine it to an $\epsilon$-approximation using $O\big(\frac{\mathrm{var}(D)}{\mathrm{gap} \cdot \epsilon}\big)$ samples from $D$. Here $\mathrm{var}(D)$ is a natural notion of variance. Combining our algorithm with previous work to initialize $x_0$, we obtain improved sample complexity and runtime results under a variety of assumptions on $D$.

We achieve our results using a general framework that we believe is of independent interest. We give a robust analysis of the classic method of shift-and-invert preconditioning, which reduces eigenvector computation to approximately solving a sequence of linear systems. We then apply fast stochastic variance reduced gradient (SVRG) based system solvers to achieve our claims.
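
To make the framework concrete, here is a minimal sketch of the reduction described in the last paragraph: an inexact shift-and-invert power method whose shifted linear systems $(\lambda I - \Sigma) y = b$, with $\Sigma = A^T A$, are solved approximately by SVRG run over the rows of $A$. Everything below is illustrative rather than the paper's algorithm: the function names (`svrg_solve`, `shift_invert_eig`), the planted-gap test matrix, and all hyper-parameters (the shift $\lambda$, step size, epoch and iteration counts) are assumptions chosen for a small demo; in particular, the paper estimates a suitable shift adaptively rather than from exact knowledge of $\lambda_1$.

```python
import numpy as np

def svrg_solve(A, lam, b, x0, epochs=15, inner=1000, step=None):
    """Approximately solve (lam*I - A^T A) y = b, with lam > lambda_1(A^T A),
    by running SVRG on the strongly convex quadratic
        f(y) = 0.5 * y^T (lam*I - A^T A) y - b^T y,
    written as an average of n components, one per row of A."""
    n, d = A.shape
    if step is None:
        # Component smoothness is roughly lam + n * max_i ||a_i||^2;
        # a conservative step size keeps the stochastic updates stable.
        L = lam + n * np.max(np.sum(A * A, axis=1))
        step = 1.0 / (10.0 * L)
    y = x0.copy()
    for _ in range(epochs):
        y_ref = y.copy()
        # Full gradient at the snapshot: (lam*I - A^T A) y_ref - b.
        full_grad = lam * y_ref - A.T @ (A @ y_ref) - b
        for _ in range(inner):
            i = np.random.randint(n)
            a = A[i]
            # Variance-reduced stochastic gradient: component-gradient
            # difference at (y, y_ref) plus the snapshot's full gradient.
            diff = y - y_ref
            g = lam * diff - n * a * (a @ diff) + full_grad
            y -= step * g
    return y

def shift_invert_eig(A, lam, iters=10):
    """Inexact shift-and-invert power method: repeatedly apply an
    approximate (lam*I - A^T A)^{-1} (via SVRG) and renormalize."""
    rng = np.random.default_rng(1)
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        x = svrg_solve(A, lam, b=x, x0=x)  # warm-started inner solve
        x /= np.linalg.norm(x)
    return x

# Tiny demo on a matrix with a planted spectral gap. The shift is set just
# above lambda_1 using the known singular values, purely for illustration.
rng = np.random.default_rng(0)
n, d = 200, 20
s = np.concatenate(([10.0, 5.0], 2.0 * rng.random(d - 2)))  # sigma_1 >> sigma_2
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
A = U @ np.diag(s) @ V.T
lam1 = s.max() ** 2                      # lambda_1(A^T A) = sigma_1^2 = 100
x = shift_invert_eig(A, lam=1.2 * lam1)
print("x^T Sigma x / lambda_1 =", x @ (A.T @ (A @ x)) / lam1)  # should be ~1
```

Note that each SVRG step touches a single row of $A$, while a full gradient (one pass over $A$) is recomputed only once per epoch; roughly speaking, this decoupling of cheap stochastic steps from occasional full passes is what lets the framework separate the $\mathrm{gap}$ dependence from the $\mathrm{nnz}(A)$ term in the first running time above.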
