
Objects are three-dimensional entities, but visual observations of them are largely 2D. Inferring 3D properties from individual 2D views is thus a generically useful skill that is critical to object perception. We ask the question: can we learn useful image representations by explicitly training a system to infer 3D shape from 2D views? The few prior attempts at single-view 3D reconstruction all target the reconstruction task as an end in itself, and largely build category-specific models to obtain better reconstructions. In contrast, we are interested in this task as a means to learn generic visual representations that embed knowledge of 3D shape properties from arbitrary object views. We train a single category-agnostic neural network from scratch to produce a complete image-based shape representation from one view of a generic object in a single forward pass. Through comparison against several baselines on widely used shape datasets, we show that our system learns to infer shape for generic objects, including those from categories that are not present in the training set. In order to perform this "mental rotation" task, our system is forced to learn intermediate image representations that embed object geometry, without requiring any manual supervision. We show that these learned representations outperform other unsupervised representations on various semantic tasks, such as object recognition and object retrieval.
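To make the described setup concrete, below is a minimal sketch of a category-agnostic encoder-decoder of the kind the abstract describes: a single forward pass maps one 2D view to a complete shape representation, and the intermediate embedding is the learned feature reused for recognition and retrieval. This is not the authors' architecture; the layer sizes, the 64x64 input resolution, the 32^3 voxel occupancy grid standing in for the shape representation, and all names are assumptions made purely for illustration.

```python
# Minimal sketch (assumed, not the paper's architecture): a single-view
# shape predictor whose intermediate embedding serves as an unsupervised
# image representation. Layer sizes and the voxel output are illustrative.
import torch
import torch.nn as nn

class SingleViewShapeNet(nn.Module):
    def __init__(self, embed_dim=512, voxel_res=32):
        super().__init__()
        self.voxel_res = voxel_res
        # 2D encoder: one forward pass from a single 64x64 view to an embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, embed_dim), nn.ReLU(),
        )
        # Decoder: embedding -> dense occupancy logits for the full shape.
        self.decoder = nn.Linear(embed_dim, voxel_res ** 3)

    def forward(self, image):
        # The embedding is the representation later reused for recognition
        # and retrieval; the occupancy logits are only a training target.
        embedding = self.encoder(image)
        voxels = self.decoder(embedding).view(
            -1, self.voxel_res, self.voxel_res, self.voxel_res)
        return embedding, voxels

# Training signal: binary cross-entropy between predicted occupancy and
# ground-truth shape voxelizations, with no category labels involved.
model = SingleViewShapeNet()
images = torch.randn(4, 3, 64, 64)                       # batch of single views
gt_voxels = torch.randint(0, 2, (4, 32, 32, 32)).float() # placeholder targets
embedding, voxel_logits = model(images)
loss = nn.functional.binary_cross_entropy_with_logits(voxel_logits, gt_voxels)
loss.backward()
```

After training, `model.encoder(image)` alone would be used to embed novel views, e.g. for nearest-neighbor object retrieval, which is the sense in which the reconstruction task is a means to learn generic features rather than an end in itself.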