Uncertainty-Aware Reward-Free Exploration with General Function Approximation

24 June 2024 · arXiv:2406.16255
Junkai Zhang, Weitong Zhang, Dongruo Zhou, Quanquan Gu
Abstract

Mastering multiple tasks through exploration and learning in an environment poses a significant challenge in reinforcement learning (RL). Unsupervised RL addresses this challenge by training policies with intrinsic rather than extrinsic rewards. However, current intrinsic reward designs and unsupervised RL algorithms often overlook the heterogeneous nature of collected samples, which diminishes their sample efficiency. To overcome this limitation, we propose a reward-free RL algorithm called GFA-RFE. The key idea behind our algorithm is an uncertainty-aware intrinsic reward for exploring the environment, combined with an uncertainty-weighted learning process to handle the heterogeneous uncertainty of different samples. Theoretically, we show that in order to find an $\epsilon$-optimal policy, GFA-RFE needs to collect $\tilde{O}\big(H^2 \log N_{\mathcal{F}}(\epsilon) \, \mathrm{dim}(\mathcal{F}) / \epsilon^2\big)$ episodes, where $\mathcal{F}$ is the value function class with covering number $N_{\mathcal{F}}(\epsilon)$ and generalized eluder dimension $\mathrm{dim}(\mathcal{F})$. This result outperforms all existing reward-free RL algorithms. We further implement and evaluate GFA-RFE across various domains and tasks in the DeepMind Control Suite. Experimental results show that GFA-RFE outperforms or matches the performance of state-of-the-art unsupervised RL algorithms.
