Convergence of Batch Asynchronous Stochastic Approximation With Applications to Reinforcement Learning

Ever since its introduction in the classic paper of Robbins and Monro in 1951, Stochastic Approximation (SA) has become a standard tool for finding a solution of an equation of the form $f(\theta) = 0$, when only noisy measurements of $f(\cdot)$ are available. In most situations, \textit{every component} of the putative solution $\theta_t$ is updated at each step $t$. In some applications such as $Q$-learning, a key technique in Reinforcement Learning (RL), \textit{only one component} of $\theta_t$ is updated at each $t$. This is known as \textbf{asynchronous} SA. The present paper studies \textbf{Batch Asynchronous SA (BASA)}, in which, at each step $t$, \textit{some but not necessarily all} components of $\theta_t$ are updated. The theory presented here embraces both conventional (synchronous) and asynchronous SA, as well as all in-between possibilities. We also prove bounds on the \textit{rate} of convergence of $\theta_t$ to the solution. As a prelude to the new results, we briefly survey some results on the convergence of the Stochastic Gradient method, proved in a companion paper by the present authors.
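To make the three update regimes concrete, here is a minimal sketch (not the paper's algorithm) on a toy linear problem $f(\theta) = A\theta - b$ with additive Gaussian noise. The matrix $A$, the step sizes $\alpha_t = 1/(t+1)$, and the rule that each component is updated independently with probability $1/2$ are all illustrative assumptions; setting the update set to all components recovers synchronous SA, and to a single component recovers asynchronous SA.

```python
import numpy as np

# Toy problem (assumption, not from the paper): find theta with
# f(theta) = A @ theta - b = 0, observed only through noisy measurements.
rng = np.random.default_rng(0)
d = 5
A = np.eye(d) + 0.1 * rng.standard_normal((d, d))  # illustrative, near-identity
b = rng.standard_normal(d)
theta_star = np.linalg.solve(A, b)   # true solution of f(theta) = 0

def noisy_f(theta):
    """Noisy measurement of f(theta) = A @ theta - b."""
    return A @ theta - b + 0.1 * rng.standard_normal(d)

theta = np.zeros(d)
for t in range(20000):
    alpha = 1.0 / (t + 1)            # Robbins-Monro step sizes
    # BASA-style update: only the components in the random set S_t move.
    # S_t = all components -> conventional (synchronous) SA;
    # |S_t| = 1            -> asynchronous SA (as in Q-learning).
    S_t = rng.random(d) < 0.5        # each component updated w.p. 1/2 (assumption)
    measurement = noisy_f(theta)
    theta[S_t] -= alpha * measurement[S_t]

print("distance to solution:", np.linalg.norm(theta - theta_star))
```

Under the usual Robbins-Monro step-size conditions ($\sum_t \alpha_t = \infty$, $\sum_t \alpha_t^2 < \infty$), the printed error shrinks toward zero on this toy instance.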