
Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice (arXiv:2305.13185)

22 May 2023
Toshinori Kitamura, Tadashi Kozuno, Yunhao Tang, Nino Vieillard, Michal Valko, Wenhao Yang, Jincheng Mei, Pierre Ménard, M. G. Azar, Rémi Munos, Olivier Pietquin, M. Geist, Csaba Szepesvári, Wataru Kumagai, Yutaka Matsuo
OffRL

Papers citing "Regularization and Variance-Weighted Regression Achieves Minimax Optimality in Linear MDPs: Theory and Practice"

3 / 3 papers shown
  1. Sample-Efficiency in Multi-Batch Reinforcement Learning: The Need for Dimension-Dependent Adaptivity
     Emmeran Johnson, Ciara Pike-Burke, Patrick Rebeschini
     OffRL
     02 Oct 2023

  2. Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo
     Haque Ishfaq, Qingfeng Lan, Pan Xu, A. R. Mahmood, Doina Precup, Anima Anandkumar, Kamyar Azizzadenesheli
     BDL, OffRL
     29 May 2023

  3. Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient
     Ming Yin, Mengdi Wang, Yu-Xiang Wang
     OffRL
     03 Oct 2022