On Gap-dependent Bounds for Offline Reinforcement Learning

1 June 2022
Xinqi Wang, Qiwen Cui, S. Du
    OffRL
Abstract

This paper presents a systematic study on gap-dependent sample complexity in offline reinforcement learning. Prior work showed that when the density ratio between an optimal policy and the behavior policy is upper bounded (the optimal policy coverage assumption), the agent can achieve an $O\left(\frac{1}{\epsilon^2}\right)$ rate, which is also minimax optimal. We show that under the optimal policy coverage assumption, the rate can be improved to $O\left(\frac{1}{\epsilon}\right)$ when there is a positive sub-optimality gap in the optimal $Q$-function. Furthermore, we show that when the visitation probabilities of the behavior policy are uniformly lower bounded for states where an optimal policy's visitation probabilities are positive (the uniform optimal policy coverage assumption), the sample complexity of identifying an optimal policy is independent of $\frac{1}{\epsilon}$. Lastly, we present nearly-matching lower bounds to complement our gap-dependent upper bounds.
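For context, a minimal sketch of the standard quantities the abstract refers to, written in the usual notation for gap-dependent analyses; the symbols $V^{*}$, $Q^{*}$, the occupancy measures $d^{\pi^{*}}$ and $\mu$, and the constant $C^{*}$ are assumed notation for illustration, not taken from the paper itself.

% Illustrative definitions (assumed notation, may differ from the paper):
% sub-optimality gap of a state-action pair under the optimal Q-function,
% and the minimal positive gap whose existence enables the faster rate
\[
  \mathrm{gap}(s,a) \;=\; V^{*}(s) - Q^{*}(s,a),
  \qquad
  \Delta_{\min} \;=\; \min_{(s,a)\,:\,\mathrm{gap}(s,a)>0} \mathrm{gap}(s,a) \;>\; 0 .
\]
% optimal policy coverage: the density ratio between an optimal policy's
% occupancy and the behavior policy's occupancy is bounded by a constant
\[
  \sup_{s,a}\; \frac{d^{\pi^{*}}(s,a)}{\mu(s,a)} \;\le\; C^{*} \;<\; \infty .
\]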
