Representation and Invariance in Reinforcement Learning
Abstract
If we changed the rules, would the wise become fools? Different research groups formalize reinforcement learning (RL) in different ways. For an agent defined in one RL framework to run in another framework's environments, the agent must first be converted, or mapped, into that other framework. Whether this is possible depends on the RL frameworks in question and on how intelligence is measured. In this paper, we lay foundations for studying relative-intelligence-preserving mappability between RL frameworks.
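As a minimal illustration of the kind of mapping the abstract refers to, the following sketch converts an agent written against one hypothetical framework interface (acting on the latest observation only) into another (acting on the full interaction history). The class and method names here are illustrative assumptions, not interfaces from the paper.

```python
class PolicyAgent:
    """Hypothetical framework A: the agent acts on the latest observation only."""
    def __init__(self, policy):
        self.policy = policy

    def act(self, observation):
        return self.policy(observation)


class PolicyToHistoryAdapter:
    """Map a framework-A agent into a hypothetical framework B, whose agents
    act on the full interaction history. The mapping projects the history
    onto its most recent observation; whether such conversions preserve an
    agent's relative intelligence is the question the paper studies."""
    def __init__(self, agent):
        self.agent = agent

    def act(self, history):
        # history: (observation, action, reward) triples followed by the
        # latest observation; the adapter keeps only that observation.
        latest_observation = history[-1]
        return self.agent.act(latest_observation)


# Usage: wrap a trivial policy that echoes its observation.
adapter = PolicyToHistoryAdapter(PolicyAgent(lambda obs: obs))
action = adapter.act([("o0", "a0", 0.0), "o1"])  # -> "o1"
```

The adapter shows one direction of such a mapping; the reverse direction (history-based agents into an observation-based framework) is generally lossy, which is why mappability is framework-dependent.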
