Learning Deep Generative Spatial Models for Mobile Robots
We propose a new probabilistic framework that allows mobile robots to autonomously learn deep generative models of their environments spanning multiple levels of abstraction. Unlike traditional approaches that integrate separately engineered components for low-level features, geometry, and semantic representations, our approach leverages recent advances in sum-product networks (SPNs) and deep learning to learn a unified deep model of a robot's spatial environment, from low-level representations to semantic interpretations. Our results, based on laser range finder data from a mobile robot, demonstrate that the proposed approach can learn the geometry of places and serves as a versatile platform for tasks ranging from semantic place classification, uncertainty estimation, and novelty detection to generating place appearances from semantic information and predicting missing data in partial observations.
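Since the abstract rests on sum-product networks, a minimal sketch may help illustrate why SPNs support both likelihood evaluation and the prediction of missing data mentioned above: setting a leaf's value to 1 marginalizes its variable exactly. The structure, weights, and variable names below are illustrative assumptions, not the paper's learned model.

```python
# Minimal sum-product network (SPN) over two binary variables X0 and X1.
# Structure and weights are illustrative assumptions only.

class Leaf:
    """Bernoulli leaf over one variable; evaluates to 1 when the
    variable is missing (None), which marginalizes it out exactly."""
    def __init__(self, var, p):
        self.var, self.p = var, p
    def value(self, x):
        v = x[self.var]
        if v is None:                 # marginalize missing evidence
            return 1.0
        return self.p if v == 1 else 1.0 - self.p

class Product:
    """Product node: children over disjoint variables (decomposable)."""
    def __init__(self, children):
        self.children = children
    def value(self, x):
        out = 1.0
        for c in self.children:
            out *= c.value(x)
        return out

class Sum:
    """Sum node: weighted mixture of children over the same variables."""
    def __init__(self, weighted_children):   # list of (weight, node)
        self.weighted_children = weighted_children
    def value(self, x):
        return sum(w * c.value(x) for w, c in self.weighted_children)

# A two-component mixture of independent Bernoullis, expressed as an SPN.
spn = Sum([
    (0.6, Product([Leaf(0, 0.9), Leaf(1, 0.2)])),
    (0.4, Product([Leaf(0, 0.1), Leaf(1, 0.8)])),
])

full = spn.value({0: 1, 1: 0})     # joint probability P(X0=1, X1=0)
marg = spn.value({0: 1, 1: None})  # marginal P(X0=1) by dropping X1
print(full, marg)                  # -> 0.44 0.58
```

A single bottom-up pass thus answers both full-evidence and partial-evidence queries in time linear in the network size, which is the property the framework relies on for uncertainty estimation and completing partial observations.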