arXiv:1701.08493

A Survey on Structure from Motion

30 January 2017
Onur Özyesil, V. Voroninski, Ronen Basri, A. Singer
Abstract

The structure from motion (SfM) problem in computer vision is the problem of recovering the 3D structure of a stationary scene from a set of projective measurements, represented as a collection of 2D images, via estimation of motion of the cameras corresponding to these images. In essence, SfM involves the three main stages of (1) extraction of features in images (e.g., points of interest, lines, etc.) and matching of these features between images, (2) camera motion estimation (e.g., using relative pairwise camera poses estimated from the extracted features), and (3) recovery of the 3D structure using the estimated motion and features (e.g., by minimizing the so-called reprojection error). This survey mainly focuses on the relatively recent developments in the literature pertaining to stages (2) and (3). More specifically, after touching upon the early factorization-based techniques for motion and structure estimation, we provide a detailed account of some of the recent camera location estimation methods in the literature, which precedes the discussion of notable techniques for 3D structure recovery. We also cover the basics of the simultaneous localization and mapping (SLAM) problem, which can be considered a specific case of the SfM problem. Additionally, our survey includes a review of the fundamentals of feature extraction and matching (i.e., stage (1) above), various recent methods for handling ambiguities in 3D scenes, SfM techniques involving relatively uncommon camera models and image features, and popular sources of data and SfM software.
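The three-stage pipeline the abstract describes can be made concrete with a short two-view example. The sketch below, written against OpenCV's Python bindings, walks through the stages in order: SIFT feature extraction and matching, relative camera motion from the essential matrix, and 3D structure recovery by triangulation. It is an illustration only, not code from the survey; the image file names, the intrinsic matrix K, the 0.75 ratio-test threshold, and the RANSAC parameters are all placeholder assumptions.

```python
# Minimal two-view SfM sketch (illustrative; not from the survey).
import cv2
import numpy as np

# Assumed calibrated pinhole camera: focal length and principal point
# below are placeholders; a real pipeline would use calibration data.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical inputs
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

# Stage (1): feature extraction and matching (SIFT + Lowe ratio test).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Stage (2): relative camera motion from the essential matrix (RANSAC),
# i.e., a pairwise camera pose estimated from the matched features.
E, mask = cv2.findEssentialMat(pts1, pts2, K,
                               method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Stage (3): recover 3D structure by triangulating the inlier matches.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
P2 = K @ np.hstack([R, t])                         # second camera pose
inl = mask.ravel().astype(bool)
pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
pts3d = (pts4d[:3] / pts4d[3]).T  # homogeneous -> Euclidean

# Reprojection error: the quantity that bundle adjustment minimizes.
hom = np.vstack([pts3d.T, np.ones(len(pts3d))])
proj = (P2 @ hom).T
proj = proj[:, :2] / proj[:, 2:3]
err = np.linalg.norm(proj - pts2[inl], axis=1).mean()
print(f"{len(pts3d)} points, mean reprojection error {err:.2f} px")
```

A full SfM system would chain many such views together and then jointly refine all camera poses and 3D points by minimizing the reprojection error over every observation (bundle adjustment); note also that a two-view reconstruction like the one above is determined only up to an unknown global scale.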
