$O(1/k)$ Finite-Time Bound for Non-Linear Two-Time-Scale Stochastic Approximation

Two-time-scale stochastic approximation is an algorithm with coupled iterations which has found broad applications in reinforcement learning, optimization, and game control. While several prior works have obtained a mean square error bound of $O(1/k)$ for linear two-time-scale iterations, the best known bound in the non-linear contractive setting has been $O(1/k^{2/3})$. In this work, we obtain an improved bound of $O(1/k)$ for non-linear two-time-scale stochastic approximation. Our result applies to algorithms such as gradient descent-ascent and two-time-scale Lagrangian optimization. The key step in our analysis involves rewriting the original iteration in terms of an averaged noise sequence which decays sufficiently fast. Additionally, we use an induction-based approach to show that the iterates are bounded in expectation.
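To make the coupled-iteration setting concrete, the following is a minimal sketch of two-time-scale stochastic gradient descent-ascent on a toy saddle-point problem. It is not the paper's algorithm or analysis: the objective, step-size exponents, and noise level are illustrative assumptions chosen only to show two coupled iterates updated with step sizes of different orders.

```python
import numpy as np

# Illustrative sketch (assumed toy example, not from the paper):
# two-time-scale stochastic gradient descent-ascent on
#   min_x max_y  f(x, y) = 0.5*x**2 + x*y - 0.5*y**2,
# whose unique saddle point is (0, 0). Gradients are observed with
# additive Gaussian noise, and the two coupled iterates use step
# sizes decaying at different rates.

rng = np.random.default_rng(0)

def grad_x(x, y):   # partial derivative of f with respect to x
    return x + y

def grad_y(x, y):   # partial derivative of f with respect to y
    return x - y

x, y = 5.0, -5.0
for k in range(1, 100_001):
    alpha = 1.0 / k           # slow (smaller) step size for the descent iterate
    beta = 1.0 / k ** 0.67    # fast (larger) step size for the ascent iterate
    noise_x, noise_y = rng.normal(scale=0.1, size=2)

    # Coupled updates: each iterate's drift depends on the other iterate.
    gx, gy = grad_x(x, y), grad_y(x, y)
    x = x - alpha * (gx + noise_x)   # descent step on x
    y = y + beta * (gy + noise_y)    # ascent step on y

print(f"final iterate: x = {x:.4f}, y = {y:.4f}  (saddle point is (0, 0))")
```

In this sketch the fast iterate y tracks its equilibrium for the current x, while the slow iterate x drifts toward the saddle point; the particular step-size exponents above are placeholders and not the conditions used in the paper's analysis.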
@article{chandak2025_2504.19375,
  title={$O(1/k)$ Finite-Time Bound for Non-Linear Two-Time-Scale Stochastic Approximation},
  author={Siddharth Chandak},
  journal={arXiv preprint arXiv:2504.19375},
  year={2025}
}