Robust Shift-and-Invert Preconditioning: Faster and More Sample Efficient Algorithms for Eigenvector Computation

We provide faster algorithms and improved sample complexities for approximating the top eigenvector of a matrix.

Offline Setting: Given an n x d matrix A, we show how to compute an epsilon-approximate top eigenvector of A^T A in time O~([nnz(A) + d*sr(A)/gap^2] * log(1/epsilon)) and O~([nnz(A)^{3/4} * (d*sr(A))^{1/4} / sqrt(gap)] * log(1/epsilon)). Here nnz(A) is the number of nonzero entries of A, sr(A) is the stable rank, and gap is the multiplicative eigenvalue gap. By separating the gap dependence from the nnz(A) term, we improve on the classic power and Lanczos methods. We also improve prior work using fast subspace embeddings and stochastic optimization, giving significantly improved dependencies on sr(A) and epsilon. Our second running time improves this further when nnz(A) <= d*sr(A)/gap^2.

Online Setting: Given a distribution D with covariance matrix Sigma and a vector x_0 which is an O(gap)-approximate top eigenvector for Sigma, we show how to refine x_0 to an epsilon-approximation using O(var(D)/(gap*epsilon)) samples from D. Here var(D) is a natural variance measure. Combining our algorithm with previous work to initialize x_0, we obtain a number of improved sample complexity and runtime results. For general distributions, we achieve asymptotically optimal accuracy as a function of sample size as the number of samples grows large.

Our results center around a robust analysis of the classic method of shift-and-invert preconditioning, which reduces eigenvector computation to approximately solving a sequence of linear systems. We then apply fast SVRG-based approximate system solvers to achieve our claims. We believe our results suggest the general effectiveness of shift-and-invert based approaches and imply that further computational gains may be reaped in practice.
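To illustrate the idea of shift-and-invert preconditioning, here is a minimal sketch of shift-and-invert power iteration. It is not the paper's algorithm: the shift selection and the fast approximate SVRG-based solvers are replaced by an assumed known shift and exact dense solves via `numpy.linalg.solve`. The point is the reduction itself: eigenvectors of (shift*I - A^T A)^{-1} coincide with those of A^T A, but inverting the shifted spectrum amplifies the eigenvalue gap, so each power step only needs a linear-system solve.

```python
import numpy as np

def shift_invert_top_eigvec(A, shift, iters=50, seed=0):
    """Approximate the top eigenvector of A^T A via shift-and-invert power iteration.

    `shift` is assumed to be slightly larger than the top eigenvalue of A^T A,
    so M = shift*I - A^T A is positive definite and its inverse has a large
    spectral gap. Each iteration solves one linear system in M; the paper
    replaces these exact solves with fast approximate stochastic solvers.
    """
    d = A.shape[1]
    M = shift * np.eye(d) - A.T @ A          # positive definite when shift > lambda_1
    x = np.random.default_rng(seed).standard_normal(d)
    for _ in range(iters):
        x = np.linalg.solve(M, x)            # one "inverse" power step: x <- M^{-1} x
        x /= np.linalg.norm(x)               # renormalize to unit length
    return x
```

For example, with A = diag(2, 1, 0.5), the matrix A^T A has top eigenvalue 4 with eigenvector e_1, and a shift of 4.4 recovers that direction in a handful of iterations.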
View on arXiv