Concentration of Contractive Stochastic Approximation and Reinforcement Learning

Abstract

Using a martingale concentration inequality, concentration bounds `from time $n_0$ on' are derived for stochastic approximation algorithms with contractive maps and both martingale difference and Markov noise. These bounds are then applied to reinforcement learning algorithms, in particular to asynchronous Q-learning and TD(0).
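As a concrete instance of the contractive stochastic approximation schemes the abstract refers to, the following is a minimal sketch (not taken from the paper) of tabular TD(0) on an illustrative ergodic Markov chain. The chain, rewards, discount factor, and step-size schedule are all assumptions chosen for the example; the update applied at each visited state is the standard asynchronous TD(0) iteration $V(s) \leftarrow V(s) + a_n\,(r(s) + \gamma V(s') - V(s))$, whose fixed point solves the contraction $V = r + \gamma P V$.

```python
import numpy as np

# Illustrative 3-state ergodic Markov chain (rows of P sum to 1).
rng = np.random.default_rng(0)
n_states = 3
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])   # transition probabilities
r = np.array([1.0, 0.0, 2.0])    # reward received on leaving each state
gamma = 0.9                      # discount factor

# Asynchronous TD(0): only the currently visited state is updated.
V = np.zeros(n_states)
s = 0
for n in range(1, 50001):
    s_next = rng.choice(n_states, p=P[s])
    a_n = 1.0 / (1.0 + n // 100)                      # decaying step size
    V[s] += a_n * (r[s] + gamma * V[s_next] - V[s])   # TD(0) update
    s = s_next

# Fixed point of the gamma-contraction V = r + gamma * P V, for comparison.
V_star = np.linalg.solve(np.eye(n_states) - gamma * P, r)
print("max abs error:", np.max(np.abs(V - V_star)))
```

The iterate tracks the fixed point of a $\gamma$-contraction driven by martingale difference noise (the sampled next state versus its expectation under $P$), which is exactly the setting in which the paper's concentration bounds apply.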
