
Self-supervised network distillation: an effective approach to exploration in sparse reward environments (arXiv:2302.11563)

22 February 2023
Matej Pecháč, M. Chovanec, Igor Farkaš

Papers citing "Self-supervised network distillation: an effective approach to exploration in sparse reward environments"

2 papers
PreND: Enhancing Intrinsic Motivation in Reinforcement Learning through Pre-trained Network Distillation
Mohammadamin Davoodabadi, Negin Hashemi Dijujin, M. Baghshah
02 Oct 2024
An information-theoretic perspective on intrinsic motivation in reinforcement learning: a survey
A. Aubret, L. Matignon, S. Hassas
19 Sep 2022