O(1/k) Finite-Time Bound for Non-Linear Two-Time-Scale Stochastic Approximation

Main: 19 pages
Bibliography: 3 pages
Abstract

Two-time-scale stochastic approximation is an algorithm with coupled iterations which has found broad applications in reinforcement learning, optimization, and games. While several prior works have obtained a mean square error bound of O(1/k) for linear two-time-scale iterations, the best known bound in the non-linear contractive setting has been O(1/k^{2/3}). In this work, we obtain an improved bound of O(1/k) for non-linear two-time-scale stochastic approximation. Our result applies to algorithms such as gradient descent-ascent and two-time-scale Lagrangian optimization. The key step in our analysis involves rewriting the original iteration in terms of an averaged noise sequence which decays sufficiently fast. Additionally, we use an induction-based approach to show that the iterates are bounded in expectation.
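To make the setting concrete, here is a minimal sketch (not the paper's method) of two-time-scale stochastic gradient descent-ascent on a simple strongly convex-concave saddle problem. The objective f(x, y) = x²/2 + xy − y²/2, the step-size exponents, and the noise level are all illustrative choices, not taken from the paper; the only structural ingredient from the abstract is the two coupled iterations running on different time scales.

```python
import numpy as np

# Illustrative two-time-scale stochastic gradient descent-ascent on
#   f(x, y) = x^2/2 + x*y - y^2/2,  which has its saddle point at (0, 0).
# The slow iterate x uses step size a_k ~ 1/k while the fast iterate y
# uses b_k ~ 1/k^0.6, so a_k / b_k -> 0: the usual two-time-scale condition
# under which y "tracks" its equilibrium y*(x) = x between slow updates.

rng = np.random.default_rng(0)
x, y = 1.0, 1.0
for k in range(1, 200_001):
    a_k = 1.0 / k           # slow (outer) step size
    b_k = 1.0 / k ** 0.6    # fast (inner) step size, decays more slowly
    gx = x + y + 0.1 * rng.standard_normal()  # noisy estimate of grad_x f
    gy = x - y + 0.1 * rng.standard_normal()  # noisy estimate of grad_y f
    x -= a_k * gx  # descend in x on the slow time scale
    y += b_k * gy  # ascend in y on the fast time scale

print(x, y)  # both iterates approach the saddle point (0, 0)
```

Because b_k decays more slowly than a_k, the fast iterate y nearly equilibrates to y ≈ x between slow updates, after which the slow iterate contracts toward the saddle point despite the noise.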
