Coupled Regression and Modeling of Depth from Single View
In this work we present a novel efficient strategy for depth estimation from single images. We formulate the task as a regression problem from a holistic representation of the image to the depth space. The input is represented by the activation values of a deep convolutional network optimized for scene classification on a large collection of examples. We demonstrate that this representation provides valuable information about the global structure and the context in the scene, which can be leveraged for accurate depth prediction. However, rather than directly regressing on depth, we propose to learn a compact depth basis and a mapping from image features to reconstructive depth weights. Crucially, the basis and the mapping are coupled and jointly optimized by our learning scheme. We demonstrate that this results in a significant improvement in accuracy compared to direct depth regression or approaches using depth dictionaries learned disjointly from the mapping. Finally, we show that our global depth prediction can be improved by local refinements performed by considering pixel-level deep features. Experiments on the NYUv2 and KITTI datasets show that our method outperforms the existing state-of-the-art by a large margin at a considerably lower computational cost for training.
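The core idea of coupling the depth basis with the feature-to-weight mapping can be illustrated with a minimal sketch. The toy example below (all sizes and variable names are hypothetical, not the paper's actual setup) jointly fits a compact basis B and a linear mapping W by alternating least squares on ||Y - X W Bᵀ||², so that each update of the basis accounts for the current mapping and vice versa, rather than learning a depth dictionary first and regressing onto it afterwards:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (hypothetical sizes): n images, d-dim holistic features,
# p depth values per image, k-atom depth basis.
n, d, p, k = 200, 32, 64, 8
X = rng.standard_normal((n, d))                                    # image features
Y = X @ rng.standard_normal((d, k)) @ rng.standard_normal((k, p))  # depth maps

# Jointly optimize basis B (p x k) and mapping W (d x k) by
# alternating least squares on the coupled objective ||Y - X W B^T||_F^2.
W = rng.standard_normal((d, k))
B = rng.standard_normal((p, k))
for _ in range(30):
    XW = X @ W
    # Basis update with the mapping fixed: solve XW @ B^T = Y in least squares.
    B = np.linalg.lstsq(XW, Y, rcond=None)[0].T
    # Mapping update with the basis fixed:
    # W = argmin ||Y - X W B^T||^2 = lstsq(X, Y B (B^T B)^{-1}).
    target = Y @ B @ np.linalg.inv(B.T @ B)
    W = np.linalg.lstsq(X, target, rcond=None)[0]

# Predicted depth for an image is its feature vector mapped to basis weights.
err = np.linalg.norm(Y - X @ W @ B.T) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.2e}")
```

Because the basis and mapping are updated against each other, the learned atoms are those most useful for prediction from features, which is the intuition behind the coupled scheme outperforming a dictionary learned disjointly from the regressor.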