Mapping Semantic Segmentation to Point Clouds Using Structure from Motion for Forest Analysis

Although remote sensing technologies for monitoring forested environments have gained increasing attention, publicly available point cloud datasets remain scarce due to the high costs, sensor requirements, and time-intensive nature of their acquisition. Moreover, to the best of our knowledge, there are no publicly available annotated datasets generated by applying Structure from Motion (SfM) algorithms to imagery, which may be due to the lack of SfM pipelines able to map semantic segmentation information onto an accurate point cloud, especially in challenging environments such as forests.

In this work, we present a novel pipeline for generating semantically segmented point clouds of forest environments. Using a custom-built forest simulator, we generate realistic RGB images of diverse forest scenes along with their corresponding semantic segmentation masks. These labeled images are then processed with modified open-source SfM software that preserves semantic information during 3D reconstruction. The resulting point clouds provide both geometric and semantic detail, offering a valuable resource for training and evaluating deep learning models aimed at segmenting real forest point clouds obtained via SfM.
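The abstract does not detail how labels are transferred from 2D masks to 3D points. As a minimal sketch of one common approach (not necessarily the authors' method), each reconstructed point can inherit the majority label of the segmentation-mask pixels at which it was observed across images; all function and variable names below are illustrative.

from collections import Counter
import numpy as np

def label_points(points_obs, masks):
    """Assign a semantic label to each 3D point by majority vote.

    points_obs: dict mapping point_id -> list of (image_id, u, v) pixel
                observations recorded by the SfM reconstruction.
    masks:      dict mapping image_id -> 2D integer array of class ids
                (the semantic segmentation mask for that image).
    Returns a dict mapping point_id -> winning class id.
    """
    labels = {}
    for pid, obs in points_obs.items():
        votes = []
        for image_id, u, v in obs:
            mask = masks[image_id]
            # Guard against observations that fall slightly outside the mask.
            if 0 <= v < mask.shape[0] and 0 <= u < mask.shape[1]:
                votes.append(int(mask[v, u]))
        if votes:
            labels[pid] = Counter(votes).most_common(1)[0][0]
    return labels

# Tiny synthetic example: two 4x4 masks, one point observed three times.
masks = {0: np.full((4, 4), 2), 1: np.full((4, 4), 2)}  # class ids are made up
masks[1][1, 1] = 5                                       # one disagreeing pixel
points_obs = {0: [(0, 1, 1), (1, 1, 1), (1, 2, 2)]}
print(label_points(points_obs, masks))                   # {0: 2}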
@article{capua2025_2505.10751,
  title   = {Mapping Semantic Segmentation to Point Clouds Using Structure from Motion for Forest Analysis},
  author  = {Francisco Raverta Capua and Pablo De Cristoforis},
  journal = {arXiv preprint arXiv:2505.10751},
  year    = {2025}
}