Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs

27 October 2022
Gellert Weisz, András György, Tadashi Kozuno, Csaba Szepesvári
arXiv:2210.15755

Papers citing "Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs"

4 / 4 papers shown

Offline RL via Feature-Occupancy Gradient Ascent
Gergely Neu, Nneka Okolo
OffRL · 22 May 2024

Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice
Toshinori Kitamura, Tadashi Kozuno, Yunhao Tang, Nino Vieillard, Michal Valko, ..., Olivier Pietquin, M. Geist, Csaba Szepesvári, Wataru Kumagai, Yutaka Matsuo
OffRL · 22 May 2023

Sample Efficient Deep Reinforcement Learning via Local Planning
Dong Yin, S. Thiagarajan, N. Lazić, Nived Rajaraman, Botao Hao, Csaba Szepesvári
29 January 2023

Approximation Benefits of Policy Gradient Methods with Aggregated States
Daniel Russo
22 July 2020