Imagination-Augmented Agents for Deep Reinforcement Learning
Théophane Weber
Sébastien Racanière
David P. Reichert
Lars Buesing
Arthur Guez
Danilo Jimenez Rezende
Adrià Puigdomènech Badia
Oriol Vinyals
Nicolas Heess
Yujia Li
Razvan Pascanu
Peter W. Battaglia
Demis Hassabis
David Silver
Daan Wierstra

Abstract
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
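The architecture the abstract describes, a model-free path combined with encoded imagined rollouts from a learned environment model, can be sketched as follows. All module names, dimensions, and the stand-in functions (`env_model`, `rollout_encoder`, `i2a_policy`) are illustrative assumptions for this sketch, not the paper's implementation, which uses learned convolutional models and LSTM rollout encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def env_model(state, action):
    # Stand-in for the learned environment model: predicts a next state
    # and a reward from the current state and a candidate action.
    return np.tanh(state + 0.1 * action), float(state.mean())

def imagine_rollout(state, action, depth=3):
    # "Imagine" a short trajectory by rolling the model forward a few steps.
    traj = []
    for _ in range(depth):
        state, reward = env_model(state, action)
        traj.append((state, reward))
    return traj

def rollout_encoder(traj):
    # Summarize an imagined trajectory into a fixed-size code
    # (a mean over predicted states stands in for a learned LSTM encoder).
    return np.mean([s for s, _ in traj], axis=0)

def i2a_policy(state, n_actions=2):
    # Model-free path: features computed directly from the observation.
    model_free = state
    # Imagination path: one rollout per candidate action, each encoded.
    codes = [rollout_encoder(imagine_rollout(state, a)) for a in range(n_actions)]
    # Both paths are concatenated as additional context for the policy head,
    # letting the network learn how to interpret the model's predictions.
    context = np.concatenate([model_free] + codes)
    logits = context[:n_actions]  # placeholder for a learned policy head
    return int(np.argmax(logits))

action = i2a_policy(rng.standard_normal(4))
```

The key design point mirrored here is that the rollouts are not used for explicit planning (e.g. search over action sequences); their encodings are simply extra inputs, so the policy network is free to learn how much to trust an imperfect model.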