Tight Finite Time Bounds of Two-Time-Scale Linear Stochastic Approximation with Markovian Noise

Stochastic approximation (SA) is an iterative algorithm for finding the fixed point of an operator using noisy samples; it is widely used in optimization and Reinforcement Learning (RL). The noise in RL exhibits a Markovian structure, and in some cases, such as gradient temporal difference (GTD) methods, SA is employed in a two-time-scale framework. This combination introduces significant theoretical challenges for analysis. We derive an upper bound on the error of the iterates of linear two-time-scale SA with Markovian noise. We demonstrate that the mean squared error decreases as Tr(Σ)/n, where n is the number of iterates and Σ is an appropriately defined covariance matrix. A key feature of our bounds is that the leading term, Tr(Σ)/n, exactly matches the covariance in the Central Limit Theorem (CLT) for two-time-scale SA, and we therefore call them tight finite-time bounds. We illustrate their use in RL by establishing the sample complexity of the off-policy algorithms TDC, GTD, and GTD2. A special case of linear two-time-scale SA that has been extensively studied is linear SA with Polyak-Ruppert averaging. We present tight finite-time bounds corresponding to the covariance matrix of the CLT in this setting as well; such bounds can be used to study TD-learning with Polyak-Ruppert averaging.
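To make the setting concrete, the following is a minimal sketch of linear two-time-scale SA with Polyak-Ruppert averaging of the slow iterate. All matrices, vectors, and step-size schedules below are illustrative choices, not taken from the paper, and i.i.d. Gaussian noise stands in for the Markovian noise the paper actually analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coupled linear system (illustrative numbers):
# the fast variable x solves  A11 x + A12 y = b1  for a frozen y,
# the slow variable y solves  A21 x + A22 y = b2  at the induced x.
A11 = 2.0 * np.eye(2); A12 = np.eye(2); b1 = np.array([1.0, 1.0])
A21 = 0.5 * np.eye(2); A22 = np.eye(2); b2 = np.array([2.0, 2.0])

# Exact fixed point, for comparison.
M = np.block([[A11, A12], [A21, A22]])
sol = np.linalg.solve(M, np.concatenate([b1, b2]))
x_star, y_star = sol[:2], sol[2:]

x = np.zeros(2)
y = np.zeros(2)
y_avg = np.zeros(2)  # Polyak-Ruppert (running) average of the slow iterate
n = 200_000
for k in range(n):
    beta = 1.0 / (k + 2) ** 0.6   # fast step size
    alpha = 2.0 / (k + 2)         # slow step size; alpha/beta -> 0
    # i.i.d. Gaussian noise as a simplified stand-in for Markovian noise
    x = x + beta * (b1 - A11 @ x - A12 @ y + 0.1 * rng.standard_normal(2))
    y = y + alpha * (b2 - A21 @ x - A22 @ y + 0.1 * rng.standard_normal(2))
    y_avg += (y - y_avg) / (k + 1)

print(x, y_avg)  # both should be close to (x_star, y_star)
```

Because beta shrinks more slowly than alpha, the fast iterate x tracks its equilibrium for the current y, while y drifts toward the overall fixed point; averaging y is the Polyak-Ruppert special case mentioned above.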
@article{haque2025_2401.00364,
  title={Tight Finite Time Bounds of Two-Time-Scale Linear Stochastic Approximation with Markovian Noise},
  author={Shaan Ul Haque and Sajad Khodadadian and Siva Theja Maguluri},
  journal={arXiv preprint arXiv:2401.00364},
  year={2025}
}