On the Coupling of Depth and Egomotion Networks for Self-Supervised Structure from Motion

Structure from motion (SfM) has recently been formulated as a self-supervised learning problem, where neural network models of depth and egomotion are learned jointly through view synthesis. Herein, we address the open problem of how to best couple, or link, the depth and egomotion network components, so that information such as a common scale factor can be shared between the networks. Towards this end, we introduce several notions of coupling, categorize existing approaches, and present a novel tightly-coupled approach that leverages the interdependence of depth and egomotion at training time and at test time. Our approach uses iterative view synthesis to recursively update the egomotion network input, permitting contextual information to be passed between the components. We demonstrate through extensive experiments that our approach promotes consistency between the depth and egomotion predictions at test time, improves generalization, and leads to state-of-the-art accuracy on indoor and outdoor depth and egomotion evaluation benchmarks.
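To make the coupling concrete, the sketch below illustrates one plausible form of the iterative view-synthesis loop described in the abstract: the egomotion network's input is recursively replaced by the re-warped source image, so that the depth prediction conditions each successive pose refinement. This is a minimal sketch under stated assumptions; the callables `depth_net`, `ego_net`, and `warp`, and the additive pose update, are illustrative placeholders rather than the authors' implementation.

```python
import torch


def iterative_coupled_inference(depth_net, ego_net, warp, I_tgt, I_src, num_iters=3):
    """Refine the egomotion estimate by repeatedly re-synthesising the source view.

    depth_net : callable mapping the target image to a per-pixel depth map
    ego_net   : callable mapping a channel-stacked image pair to a 6-DoF pose update
    warp      : differentiable inverse-warping of I_src into the target frame,
                given depth and pose (e.g. a spatial-transformer-style sampler)
    """
    depth = depth_net(I_tgt)                         # depth prediction for the target view
    pose = I_tgt.new_zeros((I_tgt.shape[0], 6))      # initial relative pose (identity)
    I_warp = I_src                                   # before any motion estimate, the synthesised view is the raw source

    for _ in range(num_iters):
        # The egomotion network sees the target image together with the *current*
        # synthesised view, so residual misalignment drives the next pose update.
        pose_update = ego_net(torch.cat([I_tgt, I_warp], dim=1))
        pose = pose + pose_update                    # simplified pose composition (assumption)
        I_warp = warp(I_src, depth, pose)            # re-synthesise the source view with the refined pose

    return depth, pose
```

The same loop can be run at training time (with a photometric loss on `I_warp` versus `I_tgt`) and at test time, which is what allows the depth and egomotion components to share contextual information rather than being evaluated independently.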