Computing the Bias of Constant-step Stochastic Approximation with Markovian Noise

23 May 2024
Sebastian Allmeier
Nicolas Gast
Abstract

We study stochastic approximation algorithms with Markovian noise and constant step-size $\alpha$. We develop a method based on infinitesimal generator comparisons to study the bias of the algorithm, which is the expected difference between $\theta_n$ -- the value at iteration $n$ -- and $\theta^*$ -- the unique equilibrium of the corresponding ODE. We show that, under some smoothness conditions, this bias is of order $O(\alpha)$. Furthermore, we show that the time-averaged bias is equal to $\alpha V + O(\alpha^2)$, where $V$ is a constant characterized by a Lyapunov equation, showing that $\mathbb{E}[\bar{\theta}_n] \approx \theta^* + V\alpha + O(\alpha^2)$, where $\bar{\theta}_n = (1/n)\sum_{k=1}^n \theta_k$ is the Polyak-Ruppert average. We also show that $\bar{\theta}_n$ converges with high probability around $\theta^* + \alpha V$. We illustrate how to combine this with Richardson-Romberg extrapolation to derive an iterative scheme with a bias of order $O(\alpha^2)$.
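The following is a minimal numerical sketch (not the paper's implementation) of the ideas in the abstract: a scalar constant-step stochastic approximation driven by a toy two-state Markov chain, its Polyak-Ruppert average, and a Richardson-Romberg combination of two step-sizes that cancels the leading $V\alpha$ bias term. The chain, the coefficients $A$, $b$, and the step-sizes are illustrative assumptions.

```python
import numpy as np

# Toy example (illustrative, not from the paper): scalar stochastic approximation
#   theta_{n+1} = theta_n + alpha * (b(X_n) - A(X_n) * theta_n),
# where X_n is a two-state Markov chain providing the Markovian noise.
# The corresponding ODE is theta' = E_pi[b] - E_pi[A] * theta, with unique
# equilibrium theta* = E_pi[b] / E_pi[A] under the stationary law pi.

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],          # transition matrix of the noise chain
              [0.2, 0.8]])
A_vals = np.array([1.0, 2.0])       # per-state drift coefficients A(x)
b_vals = np.array([1.0, 3.0])       # per-state offsets b(x)

def stationary(P):
    """Stationary distribution of a finite Markov chain (left Perron eigenvector)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

pi = stationary(P)
theta_star = pi @ b_vals / (pi @ A_vals)

def polyak_ruppert_average(alpha, n_iter=200_000):
    """Run constant-step SA and return the Polyak-Ruppert average (1/n) sum_k theta_k."""
    x, theta, avg = 0, 0.0, 0.0
    for k in range(1, n_iter + 1):
        theta += alpha * (b_vals[x] - A_vals[x] * theta)   # SA update with current state
        avg += (theta - avg) / k                            # running average of iterates
        x = rng.choice(2, p=P[x])                           # Markovian noise transition
    return avg

alpha = 0.1
tbar_a = polyak_ruppert_average(alpha)          # expected bias ~ V * alpha
tbar_half = polyak_ruppert_average(alpha / 2)   # expected bias ~ V * alpha / 2
rr = 2 * tbar_half - tbar_a                     # Richardson-Romberg: bias ~ O(alpha^2)

print(f"theta*            = {theta_star:.4f}")
print(f"avg, step alpha   = {tbar_a:.4f}")
print(f"avg, step alpha/2 = {tbar_half:.4f}")
print(f"RR extrapolation  = {rr:.4f}")
```

Since $\mathbb{E}[\bar{\theta}_n(\alpha)] \approx \theta^* + V\alpha + O(\alpha^2)$, the combination $2\bar{\theta}_n(\alpha/2) - \bar{\theta}_n(\alpha)$ removes the first-order term, which is the extrapolated scheme the abstract refers to.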
