ResearchTrend.AI

arXiv:2401.13884
Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation

25 January 2024
Yixuan Zhang
Qiaomin Xie

Papers citing "Constant Stepsize Q-learning: Distributional Convergence, Bias and Extrapolation"

3 / 3 papers shown
1. A Piecewise Lyapunov Analysis of Sub-quadratic SGD: Applications to Robust and Quantile Regression
   Yixuan Zhang, Dongyan, Yudong Chen, Qiaomin Xie
   11 Apr 2025
2. Nonasymptotic Analysis of Stochastic Gradient Descent with the Richardson-Romberg Extrapolation
   Marina Sheshukova, Denis Belomestny, Alain Durmus, Eric Moulines, Alexey Naumov, S. Samsonov
   07 Oct 2024
3. Stochastic Gradient Descent with Dependent Data for Offline Reinforcement Learning (OffRL)
   Jing-rong Dong, Xin T. Tong
   06 Feb 2022