This manuscript proposes a probabilistic framework for algorithms that iteratively solve unconstrained linear problems Bx = b, with B positive definite, for x. The goal is to maintain at every step, instead of a point estimate, a Gaussian posterior belief over the elements of the inverse of B. Extending recent probabilistic interpretations of the secant family of quasi-Newton optimization algorithms, and combining them with properties of the conjugate gradient algorithm, yields uncertainty-calibrated methods with very limited cost overhead over conjugate gradients, a self-contained novel interpretation of the quasi-Newton and conjugate gradient algorithms, and a foundation for new nonlinear optimization methods.
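For context, the classical conjugate gradient iteration that the framework builds on can be sketched as follows. This is a standard textbook CG solver for symmetric positive definite Bx = b, not the paper's probabilistic variant; the function name and tolerance are illustrative choices.

```python
import numpy as np

def conjugate_gradient(B, b, tol=1e-10, max_iter=None):
    """Solve B x = b for symmetric positive definite B via conjugate gradients."""
    n = b.shape[0]
    if max_iter is None:
        max_iter = n  # CG converges in at most n steps in exact arithmetic
    x = np.zeros(n)
    r = b - B @ x          # residual
    d = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Bd = B @ d
        alpha = rs / (d @ Bd)      # exact line search step length
        x += alpha * d
        r -= alpha * Bd
        rs_new = r @ r
        d = r + (rs_new / rs) * d  # B-conjugate direction update
        rs = rs_new
    return x
```

Each iteration costs one matrix-vector product with B; the paper's contribution is to wrap such iterations in a Gaussian posterior over the elements of the inverse of B at limited additional cost.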