Analysis of Agent Expertise in Ms. Pac-Man using
Value-of-Information-based Policies
Conventional reinforcement learning methods for Markov decision processes rely on weakly guided, stochastic searches to drive the learning process. It can therefore be difficult to predict what agent behaviors might emerge. In this paper, we consider an information-theoretic approach for performing constrained stochastic searches that promote the formation of risk-averse to risk-favoring behaviors. Our approach is based on the value of information, a criterion that provides an optimal trade-off between the expected return of a policy and the policy's complexity. As the policy complexity is reduced, the agents tend to eschew risky actions that would increase the long-term rewards; they instead focus on simply completing their main objective in an expeditious fashion. As the policy complexity increases, the agents take actions, regardless of the risk, that seek to decrease the long-term costs. A minimal-cost policy is sought in either case; the obtainable cost depends on a single, tunable parameter that regulates the degree of policy complexity. We evaluate the performance of value-of-information-based policies on a stochastic version of Ms. Pac-Man. A major component of this paper is demonstrating that different ranges of policy-complexity values yield different game-play styles and analyzing why this occurs. We show that low-complexity policies aim only to clear the environment of pellets while avoiding invulnerable ghosts. Higher-complexity policies implement multi-modal strategies that compel the agent to seek power-ups and chase after vulnerable ghosts, both of which reduce the long-term costs.
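The trade-off described above is commonly realized as a soft-max policy weighted by a marginal action prior, with the tunable parameter controlling how far the policy may deviate from that prior. The sketch below is a hypothetical illustration, not the paper's implementation: it assumes a Blahut-Arimoto-style fixed-point iteration where pi(a|s) is proportional to p(a)exp(beta*Q(s,a)), and measures policy complexity as the mutual information I(S;A). The toy Q-table, state distribution, and parameter values are invented for demonstration.

```python
import numpy as np

def voi_policy(Q, p_s, beta, iters=200):
    """Blahut-Arimoto-style iteration for a value-of-information policy.

    Q:    (S, A) array of action values (higher is better).
    p_s:  (S,) state-visitation distribution.
    beta: complexity parameter; small -> near-uniform, simple policy,
          large -> near-greedy, complex policy.
    Returns the policy pi (S, A) and its complexity I(S;A) in nats.
    """
    n_states, n_actions = Q.shape
    prior = np.full(n_actions, 1.0 / n_actions)  # marginal action prior p(a)
    for _ in range(iters):
        logits = beta * Q + np.log(prior)        # pi(a|s) ~ p(a) exp(beta Q(s,a))
        logits -= logits.max(axis=1, keepdims=True)
        pi = np.exp(logits)
        pi /= pi.sum(axis=1, keepdims=True)
        prior = p_s @ pi                         # p(a) = sum_s p(s) pi(a|s)
    # Policy complexity: I(S;A) = sum_s p(s) KL(pi(.|s) || p(.))
    complexity = np.sum(p_s[:, None] * pi * np.log(pi / prior[None, :] + 1e-12))
    return pi, complexity

# Toy example: in state 1, action 1 is "risky" but has a higher value.
Q = np.array([[1.0, 0.0],
              [0.2, 1.5]])
p_s = np.array([0.5, 0.5])

for beta in (0.1, 10.0):
    pi, info = voi_policy(Q, p_s, beta)
    print(f"beta={beta}: I(S;A) = {info:.3f} nats")
```

At small beta the policy stays close to the uniform prior in every state (complexity near zero), mirroring the simple pellet-clearing behavior; at large beta the policy commits to a different action in each state, driving I(S;A) toward log 2 and allowing the state-dependent, higher-value strategies.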