Horizon-Free and Variance-Dependent Reinforcement Learning for Latent Markov Decision Processes

20 October 2022
Runlong Zhou, Ruosong Wang, S. Du
Abstract

We study regret minimization for reinforcement learning (RL) in Latent Markov Decision Processes (LMDPs) with context in hindsight. We design a novel model-based algorithmic framework which can be instantiated with both a model-optimistic and a value-optimistic solver. We prove an $\tilde{O}(\sqrt{\mathsf{Var}^\star M \Gamma S A K})$ regret bound, where $\tilde{O}$ hides logarithmic factors, $M$ is the number of contexts, $S$ is the number of states, $A$ is the number of actions, $K$ is the number of episodes, $\Gamma \le S$ is the maximum transition degree of any state-action pair, and $\mathsf{Var}^\star$ is a variance quantity describing the determinism of the LMDP. The regret bound scales only logarithmically with the planning horizon, thus yielding the first (nearly) horizon-free regret bound for LMDPs. It is also the first problem-dependent regret bound for LMDPs. Key to our proof is an analysis of the total variance of alpha vectors (a generalization of value functions), which is handled with a truncation method. We complement our positive result with a novel $\Omega(\sqrt{\mathsf{Var}^\star M S A K})$ regret lower bound with $\Gamma = 2$, which shows that our upper bound is minimax optimal when $\Gamma$ is a constant, for the class of variance-bounded LMDPs. Our lower bound relies on new constructions of hard instances and an argument inspired by the symmetrization technique from theoretical computer science, both of which are technically different from existing lower bound proofs for MDPs and thus may be of independent interest.
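For quick reference, the two rates stated in the abstract can be set side by side (notation as defined above; the juxtaposition is a restatement, not an additional claim):

$$\underbrace{\tilde{O}\!\left(\sqrt{\mathsf{Var}^\star M \Gamma S A K}\right)}_{\text{upper bound}} \qquad \underbrace{\Omega\!\left(\sqrt{\mathsf{Var}^\star M S A K}\right)}_{\text{lower bound, } \Gamma = 2}$$

Up to logarithmic terms, the two differ only by a $\sqrt{\Gamma}$ factor, which is why the upper bound is minimax optimal whenever $\Gamma$ is a constant.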
