Improved Convergence Rates of Anderson Acceleration for a Large Class of Fixed-Point Iterations

Abstract
This paper studies Anderson acceleration (AA) for fixed-point methods $x^{(k+1)} = q(x^{(k)})$. It provides the first proof that when the operator $q$ is linear and symmetric, AA improves the root-linear convergence factor over the fixed-point iterations. When $q$ is nonlinear, yet has a symmetric Jacobian at the solution, a slightly modified AA algorithm is proved to achieve an analogous improvement in the root-linear convergence factor over the fixed-point iterations. Simulations verify these theoretical findings. Furthermore, experiments with different data models demonstrate that AA is significantly superior to the standard fixed-point methods for Tyler's M-estimation.
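For readers unfamiliar with the method, the sketch below shows a standard windowed Anderson acceleration scheme applied to a generic fixed-point map $q$; it is a minimal illustration of the technique studied here, not the paper's modified algorithm, and the window size, tolerance, and test problem (a linear symmetric contraction) are illustrative assumptions.

```python
# Minimal sketch of windowed Anderson acceleration for x_{k+1} = q(x_k).
# The map q, window size m, and tolerance are illustrative choices.
import numpy as np

def anderson_acceleration(q, x0, m=5, max_iter=200, tol=1e-10):
    """Accelerate the fixed-point iteration x_{k+1} = q(x_k)."""
    x = np.asarray(x0, dtype=float)
    r = q(x) - x                      # residual of the plain iteration
    dX, dR = [], []                   # histories of x- and residual-differences
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        if dR:                        # least-squares step over the history window
            F = np.column_stack(dR)
            gamma, *_ = np.linalg.lstsq(F, r, rcond=None)
            x_new = x + r - (np.column_stack(dX) + F) @ gamma
        else:                         # first step: plain fixed-point update
            x_new = x + r
        r_new = q(x_new) - x_new
        dX.append(x_new - x)
        dR.append(r_new - r)
        if len(dX) > m:               # keep only the last m differences
            dX.pop(0)
            dR.pop(0)
        x, r = x_new, r_new
    return x

# Example: a linear, symmetric contraction q(x) = A x + b with spectral radius 0.9,
# whose fixed point is the solution of (I - A) x = b.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = 0.9 * (M + M.T) / np.linalg.norm(M + M.T, 2)
b = rng.standard_normal(20)
x_star = anderson_acceleration(lambda x: A @ x + b, np.zeros(20))
print(np.linalg.norm(x_star - np.linalg.solve(np.eye(20) - A, b)))
```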