

On Using Hamiltonian Monte Carlo Sampling for Reinforcement Learning Problems in High-dimension

11 November 2020
Udari Madhushani
Biswadip Dey
Naomi Ehrich Leonard
Amit Chakraborty
    OffRL
arXiv:2011.05927 (v3, latest)
Abstract

Value function based reinforcement learning (RL) algorithms, for example, Q-learning, learn optimal policies from datasets of actions, rewards, and state transitions. However, when the underlying state transition dynamics are stochastic and evolve on a high-dimensional space, generating independent and identically distributed (IID) data samples for creating these datasets poses a significant challenge due to the intractability of the associated normalizing integral. In these scenarios, Hamiltonian Monte Carlo (HMC) sampling offers a computationally tractable way to generate data for training RL algorithms. In this paper, we introduce a framework, called Hamiltonian Q-Learning, that demonstrates, both theoretically and empirically, that Q values can be learned from a dataset generated by HMC samples of actions, rewards, and state transitions. Furthermore, to exploit the underlying low-rank structure of the Q function, Hamiltonian Q-Learning uses a matrix completion algorithm for reconstructing the updated Q function from Q value updates over a much smaller subset of state-action pairs. Thus, by providing an efficient way to apply Q-learning in stochastic, high-dimensional settings, the proposed approach broadens the scope of RL algorithms for real-world applications.
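
The abstract combines two computational ingredients: HMC sampling of states from an unnormalized density, and low-rank matrix completion of the Q table from updates on a visited subset of state-action pairs. The sketch below illustrates both on a toy problem; the dynamics, reward, discretization, and the SVD-based hard-impute completion step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two ingredients described above, assuming a toy
# 2-D state space, a synthetic (unnormalized) density, and SVD-based
# hard-impute as one simple matrix completion algorithm. All names,
# dynamics, rewards, and hyperparameters are illustrative assumptions.
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples,
               step_size=0.1, n_leapfrog=10, seed=0):
    """Sample from a density known only up to its normalizing constant.

    HMC needs just log_prob (up to a constant) and its gradient, which is
    what makes it tractable when the normalizing integral is not.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.normal(size=x.shape)                 # fresh momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new += step_size * grad_log_prob(x_new)
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_prob(x_new)
        # Metropolis correction: H = -log_prob + kinetic energy.
        h_old = -log_prob(x) + 0.5 * p @ p
        h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

def complete_low_rank(Q, observed, rank, n_iter=50):
    """Fill unvisited Q entries by alternating low-rank SVD truncation
    with re-imposing the observed entries (hard-impute)."""
    filled = np.where(observed, Q, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        filled = np.where(observed, Q, low_rank)
    return low_rank

# Toy run: HMC draws states from an unnormalized Gaussian density,
# Q-learning updates only the visited (state, action) pairs, and matrix
# completion reconstructs the full Q table from that small subset.
log_prob = lambda x: -0.5 * x @ x                    # unnormalized log density
grad_log_prob = lambda x: -x
states = hmc_sample(log_prob, grad_log_prob, np.zeros(2), n_samples=500)

n_s, n_a, gamma, alpha = 20, 5, 0.95, 0.5
Q = np.zeros((n_s, n_a))
observed = np.zeros_like(Q, dtype=bool)
rng = np.random.default_rng(1)
for x in states:
    s = min(int(np.linalg.norm(x) / 0.2), n_s - 1)   # discretize sampled state
    a = rng.integers(n_a)
    r = -np.linalg.norm(x)                           # illustrative reward
    s_next = min(s + 1, n_s - 1)                     # illustrative transition
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    observed[s, a] = True

Q_full = complete_low_rank(Q, observed, rank=3)      # Q values for all pairs
```

Hard-impute is just one convenient completion method; the point it illustrates matches the abstract: only the visited subset of state-action pairs receives direct Q updates, and the assumed low-rank structure supplies the rest.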
