Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation

3 October 2022
Dan Qiao, Yu-Xiang Wang
Topics: OffRL
arXiv:2210.00701

Papers citing "Near-Optimal Deployment Efficiency in Reward-Free Reinforcement Learning with Linear Function Approximation"

3 / 3 papers shown
First-Order Regret in Reinforcement Learning with Linear Function Approximation: A Robust Estimation Approach
Andrew Wagenmaker, Yifang Chen, Max Simchowitz, S. Du, Kevin G. Jamieson
07 Dec 2021

Provably Efficient Reinforcement Learning with Linear Function Approximation Under Adaptivity Constraints
Chi Jin, Zhuoran Yang, Zhaoran Wang
Topics: OffRL
06 Jan 2021

Reward-Free Exploration for Reinforcement Learning
Chi Jin, A. Krishnamurthy, Max Simchowitz, Tiancheng Yu
Topics: OffRL
07 Feb 2020