A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting
Philip Amortila, Nan Jiang, Tengyang Xie
2 November 2020 · arXiv:2011.01075
OffRL
Papers citing "A Variant of the Wang-Foster-Kakade Lower Bound for the Discounted Setting" (10 papers shown):
Model Selection for Off-policy Evaluation: New Algorithms and Experimental Protocol
Pai Liu, Lingfeng Zhao, Shivangi Agarwal, Jinghan Liu, Audrey Huang, Philip Amortila, Nan Jiang (OODD, OffRL), 11 Feb 2025

The Optimal Approximation Factors in Misspecified Off-Policy Value Function Estimation
Philip Amortila, Nan Jiang, Csaba Szepesvári (OffRL), 25 Jul 2023

A Complete Characterization of Linear Estimators for Offline Policy Evaluation
Juan C. Perdomo, A. Krishnamurthy, Peter L. Bartlett, Sham Kakade (OffRL), 08 Mar 2022

The Impact of Data Distribution on Q-learning with Function Approximation
Pedro P. Santos, Diogo S. Carvalho, Alberto Sardinha, Francisco S. Melo (OffRL), 23 Nov 2021

Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation
Dylan J. Foster, A. Krishnamurthy, D. Simchi-Levi, Yunzong Xu (OffRL), 21 Nov 2021

Offline RL Without Off-Policy Evaluation
David Brandfonbrener, William F. Whitney, Rajesh Ranganath, Joan Bruna (OffRL), 16 Jun 2021

An Exponential Lower Bound for Linearly-Realizable MDPs with Constant Suboptimality Gap
Yuanhao Wang, Ruosong Wang, Sham Kakade (OffRL), 23 Mar 2021

Infinite-Horizon Offline Reinforcement Learning with Linear Function Approximation: Curse of Dimensionality and Algorithm
Lin Chen, B. Scherrer, Peter L. Bartlett (OffRL), 17 Mar 2021

Instabilities of Offline RL with Pre-Trained Neural Representation
Ruosong Wang, Yifan Wu, Ruslan Salakhutdinov, Sham Kakade (OffRL), 08 Mar 2021

Exponential Lower Bounds for Batch Reinforcement Learning: Batch RL can be Exponentially Harder than Online RL
Andrea Zanette (OffRL), 14 Dec 2020