arXiv:2207.13081 (Cited By)
Future-Dependent Value-Based Off-Policy Evaluation in POMDPs
26 July 2022
Authors: Masatoshi Uehara, Haruka Kiyohara, Andrew Bennett, Victor Chernozhukov, Nan Jiang, Nathan Kallus, C. Shi, Wen Sun
Papers citing "Future-Dependent Value-Based Off-Policy Evaluation in POMDPs" (10 of 10 shown):

1. Reinforcement Learning with Continuous Actions Under Unmeasured Confounding
   Yuhan Li, Eugene Han, Yifan Hu, Wenzhuo Zhou, Zhengling Qi, Yifan Cui, Ruoqing Zhu (01 May 2025)

2. On the Curses of Future and History in Future-dependent Value Functions for Off-policy Evaluation
   Yuheng Zhang, Nan Jiang (22 Feb 2024)

3. Offline Policy Evaluation and Optimization under Confounding
   Chinmaya Kausik, Yangyi Lu, Kevin Tan, Maggie Makar, Yixin Wang, Ambuj Tewari (29 Nov 2022)

4. Partially Observable RL with B-Stability: Unified Structural Condition and Sharp Sample-Efficient Algorithms
   Fan Chen, Yu Bai, Song Mei (29 Sep 2022)

5. Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems
   Masatoshi Uehara, Ayush Sekhari, Jason D. Lee, Nathan Kallus, Wen Sun (24 Jun 2022)

6. Pessimism in the Face of Confounders: Provably Efficient Offline Reinforcement Learning in Partially Observable Markov Decision Processes
   Miao Lu, Yifei Min, Zhaoran Wang, Zhuoran Yang (26 May 2022)

7. Off-Policy Evaluation in Partially Observed Markov Decision Processes under Sequential Ignorability
   Yupeng Tang, Seung-seob Lee (24 Oct 2021)

8. Pessimistic Model-based Offline Reinforcement Learning under Partial Coverage
   Masatoshi Uehara, Wen Sun (13 Jul 2021)

9. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
   Sergey Levine, Aviral Kumar, George Tucker, Justin Fu (04 May 2020)

10. Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes
    Nathan Kallus, Masatoshi Uehara (22 Aug 2019)