ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Posterior Sampling for Continuing Environments

29 November 2022
Wanqiao Xu
Shi Dong
Benjamin Van Roy
Abstract

We develop an extension of posterior sampling for reinforcement learning (PSRL) that is suited for a continuing agent-environment interface and integrates naturally into agent designs that scale to complex environments. The approach, continuing PSRL, maintains a statistically plausible model of the environment and follows a policy that maximizes expected $\gamma$-discounted return in that model. At each time, with probability $1-\gamma$, the model is replaced by a sample from the posterior distribution over environments. For a choice of discount factor that suitably depends on the horizon $T$, we establish an $\tilde{O}(\tau S \sqrt{AT})$ bound on the Bayesian regret, where $S$ is the number of environment states, $A$ is the number of actions, and $\tau$ denotes the reward averaging time, which is a bound on the duration required to accurately estimate the average reward of any policy. Our work is the first to formalize and rigorously analyze the resampling approach with randomized exploration.
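The resampling scheme described in the abstract can be illustrated in a small tabular setting. The sketch below is not the authors' implementation: it assumes known rewards, a Dirichlet posterior over transition probabilities, and plain value iteration for planning, and the names `continuing_psrl`, `discounted_policy`, and `env_step` are all hypothetical.

```python
import numpy as np

def discounted_policy(P, R, gamma, iters=500):
    """Value iteration for the gamma-discounted optimal policy of a sampled MDP.

    P: (S, A, S) transition probabilities; R: (S, A) expected rewards.
    """
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * (P @ V)          # (S, A) action values
        V = Q.max(axis=1)
    return (R + gamma * (P @ V)).argmax(axis=1)

def continuing_psrl(env_step, S, A, R, gamma=0.99, T=10_000, rng=None):
    """Continuing PSRL sketch: resample the model with probability 1 - gamma.

    env_step(s, a) -> next state drawn from the true (unknown) dynamics.
    Rewards R are assumed known here; only transitions are learned.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    counts = np.ones((S, A, S))          # Dirichlet(1, ..., 1) prior over P(.|s, a)
    s, policy = 0, None
    for _ in range(T):
        # Replace the model by a posterior sample (and replan) with
        # probability 1 - gamma, and at the start of the interaction.
        if policy is None or rng.random() < 1 - gamma:
            P = np.empty((S, A, S))
            for si in range(S):
                for ai in range(A):
                    P[si, ai] = rng.dirichlet(counts[si, ai])
            policy = discounted_policy(P, R, gamma)
        a = policy[s]
        s_next = env_step(s, a)
        counts[s, a, s_next] += 1        # conjugate posterior update
        s = s_next
    return counts
```

Because resampling happens with probability $1-\gamma$ at each step, the sampled model is held fixed for a geometrically distributed number of steps with mean $1/(1-\gamma)$, which is the mechanism the paper's regret analysis formalizes.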
