Reinforcement Learning with Continuous Actions Under Unmeasured Confounding

1 May 2025
Yuhan Li, Eugene Han, Yifan Hu, Wenzhuo Zhou, Zhengling Qi, Yifan Cui, Ruoqing Zhu
Abstract

This paper addresses the challenge of offline policy learning in reinforcement learning with continuous action spaces when unmeasured confounders are present. While most existing research focuses on policy evaluation within partially observable Markov decision processes (POMDPs) and assumes discrete action spaces, we advance this field by establishing a novel identification result to enable the nonparametric estimation of policy value for a given target policy under an infinite-horizon framework. Leveraging this identification, we develop a minimax estimator and introduce a policy-gradient-based algorithm to identify the in-class optimal policy that maximizes the estimated policy value. Furthermore, we provide theoretical results regarding the consistency, finite-sample error bound, and regret bound of the resulting optimal policy. Extensive simulations and a real-world application using the German Family Panel data demonstrate the effectiveness of our proposed methodology.
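The abstract describes a two-step recipe: estimate the value of a parameterized target policy from offline data, then run gradient ascent on that estimate to find the in-class optimal policy. The sketch below illustrates only this general shape on toy simulated data; it does not implement the paper's confounding-robust minimax estimator. All names (`value_and_grad`, the simulated behavior policy, the Gaussian policy class) are hypothetical, and a simple self-normalized importance-sampling estimate stands in for the paper's identification-based value estimator.

```python
# Illustrative sketch only: policy-gradient ascent on an off-policy value
# estimate for a continuous-action (Gaussian) policy class. The value
# estimator here is plain self-normalized importance sampling, used as a
# stand-in for the paper's minimax estimator under unmeasured confounding.
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical offline data from a known behavior policy ---
n = 5000
s = rng.normal(size=n)                        # states
behavior_mean = 0.5 * s
a = behavior_mean + rng.normal(size=n)        # behavior actions ~ N(0.5*s, 1)
r = -(a - s) ** 2 + 0.1 * rng.normal(size=n)  # reward peaks when action == state

def log_normal_pdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - np.log(std * np.sqrt(2 * np.pi))

log_b = log_normal_pdf(a, behavior_mean, 1.0)  # behavior density (assumed known here)

SIGMA = 1.0  # fixed std of the target policy class pi_theta(a|s) = N(theta*s, SIGMA^2)

def value_and_grad(theta):
    """Self-normalized IS estimate of V(pi_theta) and its gradient in theta."""
    log_pi = log_normal_pdf(a, theta * s, SIGMA)
    w = np.exp(log_pi - log_b)                   # importance weights
    dlogpi = (a - theta * s) * s / SIGMA ** 2    # score of the Gaussian policy
    sw, swr = w.sum(), (w * r).sum()
    v = swr / sw
    dsw, dswr = (w * dlogpi).sum(), (w * dlogpi * r).sum()
    grad = (dswr * sw - swr * dsw) / sw ** 2     # quotient rule on the SNIS ratio
    return v, grad

# --- Gradient ascent on the estimated policy value ---
theta, lr = 0.0, 0.05
for _ in range(200):
    v, g = value_and_grad(theta)
    theta += lr * g

print(f"learned theta ~ {theta:.3f}, estimated value ~ {v:.3f}")
# With this toy reward, the in-class optimum is theta ~ 1 (take the action equal to the state).
```

In the paper's setting the value estimate would instead come from the minimax estimator built on the identification result, which remains valid in the presence of unmeasured confounders; the gradient-ascent outer loop is the part this sketch is meant to convey.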

@article{li2025_2505.00304,
  title={Reinforcement Learning with Continuous Actions Under Unmeasured Confounding},
  author={Yuhan Li and Eugene Han and Yifan Hu and Wenzhuo Zhou and Zhengling Qi and Yifan Cui and Ruoqing Zhu},
  journal={arXiv preprint arXiv:2505.00304},
  year={2025}
}