Stronger Approximate Singular Value Decomposition via the Block Lanczos
and Power Methods

We re-analyze Simultaneous Power Iteration and the Block Lanczos method, two classical iterative algorithms for the singular value decomposition (SVD). We are interested in convergence bounds that *do not depend* on properties of the input matrix (e.g. singular value gaps). Simultaneous Iteration is known to give a low rank approximation within (1+ε) of optimal for spectral norm error in Õ(1/ε) iterations. We strengthen this result, proving that it finds approximate principal components very close in quality to those given by an exact SVD. Our work bridges a divide between classical analysis, which can give similar bounds but depends critically on singular value gaps, and more recent work, which focuses only on low rank approximation. Furthermore, we extend our bounds to the Block Lanczos method, which we show obtains the same approximation guarantees in just Õ(1/√ε) iterations, giving the fastest known algorithm for spectral norm low rank approximation and principal component approximation. Despite their popularity, Krylov subspace methods like Block Lanczos previously seemed more difficult to analyze and did not come with rigorous gap-independent guarantees. Finally, we give insight beyond the worst case, justifying why Simultaneous Power Iteration and Block Lanczos can run much faster in practice than predicted. We clarify how simple techniques can potentially accelerate both algorithms significantly.
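For intuition, here is a minimal NumPy sketch of randomized Simultaneous Power Iteration, the first algorithm the abstract refers to. The function name, parameters, and iteration count are illustrative, not the paper's exact procedure: the method repeatedly applies AAᵀ to a random starting block, orthonormalizing between rounds, so that the span of the block converges to the top-k singular subspace.

```python
import numpy as np

def simultaneous_power_iteration(A, k, iters, seed=0):
    """Illustrative sketch (not the paper's exact algorithm): returns an
    orthonormal n x k matrix Q whose column span approximates the span of
    the top-k left singular vectors of A."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # Random starting block; repeated multiplication by A A^T amplifies
    # the components along the top singular directions.
    K = A @ rng.standard_normal((d, k))
    for _ in range(iters):
        # Re-orthonormalize each round for numerical stability.
        Q, _ = np.linalg.qr(K)
        K = A @ (A.T @ Q)
    Q, _ = np.linalg.qr(K)
    return Q

# A near-optimal rank-k approximation is then the projection of A
# onto span(Q):  A_approx = Q @ (Q.T @ A).
```

The spectral norm error ‖A − QQᵀA‖₂ of the resulting projection is what the (1+ε)-optimality guarantee above bounds relative to the best rank-k approximation error σ_{k+1}.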