Memory-Augmented Reinforcement Learning for Image-Goal Navigation
In this work, we present a memory-augmented approach for image-goal navigation. Our key hypothesis is that, while episodic reinforcement learning may be a convenient framework for tackling this task, embodied agents, once deployed, do not simply cease to exist after an episode has ended. They persist, and so should their memories. Our approach leverages a cross-episode memory to learn to navigate. First, we train a state-embedding network in a self-supervised fashion, and then use it to embed previously visited states into the agent's memory. Our navigation policy takes advantage of the information stored in the memory via an attention mechanism. We validate our approach through extensive evaluations, and show that our model establishes a new state of the art on the challenging Gibson dataset. We obtain this competitive performance from RGB input alone, without access to additional information such as position or depth.
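The attention-based memory read-out described above can be sketched as scaled dot-product attention of the current observation embedding over a bank of stored state embeddings. This is an illustrative sketch only, not the paper's actual architecture: the function names, dimensions, and the use of plain dot-product attention are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_readout(query, memory):
    """Attend over a memory of previously visited state embeddings.

    query:  (D,)   embedding of the current observation (hypothetical shape)
    memory: (M, D) embeddings stored across episodes (hypothetical shape)
    returns (D,)   attention-weighted read-out that could condition a policy
    """
    d = query.shape[-1]
    scores = memory @ query / np.sqrt(d)   # (M,) similarity of query to each slot
    weights = softmax(scores)              # (M,) attention distribution over memory
    return weights @ memory                # (D,) convex combination of stored states

# Usage: a memory of 16 stored state embeddings of dimension 32
rng = np.random.default_rng(0)
mem = rng.standard_normal((16, 32))
obs = rng.standard_normal(32)
out = memory_readout(obs, mem)
print(out.shape)  # (32,)
```

In a full agent, the read-out would typically be concatenated with the current observation embedding before being fed to the policy network, letting the agent exploit states it has seen in earlier episodes.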