Reinforcement learning algorithms have performed well in playing challenging board and video games, and an increasing amount of research focuses on improving their generalisation ability. The General Video Game AI Learning Competition aims at designing agents that are capable of learning to play game levels that were unseen during training. This paper summarises the five editions of the General Video Game AI Learning Competition. In each edition, three new games were designed; for each game, three test levels were generated by perturbing or combining two training levels. We then present a novel reinforcement learning framework with dual observations for general video game playing, under the assumption that similar local information is more likely than global information to recur across different levels. Therefore, instead of directly taking a single raw pixel-based screenshot of the current game screen as input, our proposed framework takes encoded, transformed global and local observations of the game screen as two simultaneous inputs, aiming at learning local information that transfers to playing new levels. The proposed framework is implemented with three state-of-the-art reinforcement learning algorithms and tested on the game set of the 2020 General Video Game AI Learning Competition. Ablation studies demonstrate the benefit of using encoded, transformed global and local observations as input. The overall best-performing agent is further used as a baseline in the 2021 competition edition.
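As a rough illustration of the dual-observation idea described above, the sketch below shows a policy network that encodes a transformed global observation and a local, avatar-centred crop in two separate streams and fuses them before the action head. It is a minimal sketch assuming PyTorch; the observation sizes (84x84 global, 21x21 local), layer configuration, and logits output are illustrative assumptions, not the exact design from the paper.

```python
# Minimal sketch of a dual-observation policy network (assumed shapes and layers).
import torch
import torch.nn as nn


class DualObsPolicy(nn.Module):
    """Policy fed by two encoders: one for the (transformed) global screen
    and one for a local patch cropped around the avatar."""

    def __init__(self, n_actions: int, global_shape=(3, 84, 84), local_shape=(3, 21, 21)):
        super().__init__()
        # Encoder for the encoded/downscaled global observation (assumed 84x84).
        self.global_enc = nn.Sequential(
            nn.Conv2d(global_shape[0], 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Encoder for the local, avatar-centred crop (assumed 21x21).
        self.local_enc = nn.Sequential(
            nn.Conv2d(local_shape[0], 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer flattened feature sizes once, then fuse both streams in the head.
        with torch.no_grad():
            g = self.global_enc(torch.zeros(1, *global_shape)).shape[1]
            l = self.local_enc(torch.zeros(1, *local_shape)).shape[1]
        self.head = nn.Sequential(nn.Linear(g + l, 256), nn.ReLU(), nn.Linear(256, n_actions))

    def forward(self, global_obs: torch.Tensor, local_obs: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.global_enc(global_obs), self.local_enc(local_obs)], dim=1)
        return self.head(fused)  # action logits, usable by any policy-gradient RL algorithm


if __name__ == "__main__":
    policy = DualObsPolicy(n_actions=6)
    logits = policy(torch.zeros(2, 3, 84, 84), torch.zeros(2, 3, 21, 21))
    print(logits.shape)  # torch.Size([2, 6])
```

Such a two-stream head can be plugged into standard RL algorithms (for example the three algorithms evaluated in the paper) by replacing their single-observation feature extractor.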