Improving Intrinsic Exploration by Creating Stationary Objectives

27 October 2023
Roger Creus Castanyer, Javier Civera, Taihú Pire
OffRL

Papers citing "Improving Intrinsic Exploration by Creating Stationary Objectives"

4 / 4 papers shown
  1. RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning
     Mingqi Yuan, Roger Creus Castanyer, Bo Li, Xin Jin, Glen Berseth, Wenjun Zeng
     29 May 2024
  2. Augmenting Unsupervised Reinforcement Learning with Self-Reference
     Andrew Zhao, Erle Zhu, Rui Lu, Matthieu Lin, Yong-Jin Liu, Gao Huang
     SSL
     16 Nov 2023
  3. MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research
     Mikayel Samvelyan, Robert Kirk, Vitaly Kurin, Jack Parker-Holder, Minqi Jiang, Eric Hambro, Fabio Petroni, Heinrich Küttler, Edward Grefenstette, Tim Rocktäschel
     OffRL
     27 Sep 2021
  4. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
     Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
     OffRL, GP
     04 May 2020