
Bootstrapped Reward Shaping

AAAI Conference on Artificial Intelligence (AAAI), 2025
Main: 7 pages · Appendix: 3 pages · Bibliography: 2 pages · 6 figures · 2 tables
Abstract

In reinforcement learning, especially in sparse-reward domains, many environment steps are required before any reward information is observed. To increase the frequency of such observations, "potential-based reward shaping" (PBRS) has been proposed as a way to provide a denser reward signal while leaving the optimal policy invariant. However, the required "potential function" must be carefully designed with task-dependent knowledge so as not to degrade training performance. In this work, we propose a "bootstrapped" method of reward shaping, termed BSRS, in which the agent's current estimate of the state-value function acts as the potential function for PBRS. We provide convergence proofs for the tabular setting, give insights into training dynamics for deep RL, and show that the proposed method improves training speed in the Atari suite.
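
For concreteness, PBRS adds a shaping term F(s, s') = gamma * Phi(s') - Phi(s) to the environment reward, and BSRS takes the potential Phi to be the agent's current value estimate. The sketch below illustrates only this shaping computation; the function name, the tabular value array, and the numbers are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def bsrs_shaped_reward(r, s, s_next, V, gamma=0.99):
        # Standard PBRS shaping term F(s, s') = gamma * Phi(s') - Phi(s),
        # with the potential Phi set to the current value estimate V,
        # as the abstract describes for BSRS.
        return r + gamma * V[s_next] - V[s]

    # Illustrative tabular value estimate over 5 states.
    V = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    print(bsrs_shaped_reward(r=0.0, s=1, s_next=2, V=V))  # 0.99*1.0 - 0.5 = 0.49

Because the shaping term is a potential difference, it telescopes along trajectories, which is what leaves the optimal policy invariant; BSRS simply updates the potential as the value estimate itself improves during training.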
