Speak the Same Language: Global LiDAR Registration on BIM Using Pose Hough Transform

7 May 2024
Zhijian Qiao, Haoming Huang, Chuhao Liu, Zehuan Yu, Shaojie Shen, Fumin Zhang, Huan Yin
Abstract

Light detection and ranging (LiDAR) point clouds and building information modeling (BIM) represent two distinct data modalities in the fields of robot perception and construction. These modalities originate from different sources and are associated with unique reference frames. The primary goal of this study is to align these modalities within a shared reference frame using a global registration approach, effectively enabling them to "speak the same language". To achieve this, we propose a cross-modality registration method, spanning from the front end to the back end. At the front end, we extract triangle descriptors by identifying walls and intersected corners, enabling the matching of corner triplets with a complexity independent of the BIM's size. For the back-end transformation estimation, we utilize the Hough transform to map the matched triplets to the transformation space and introduce a hierarchical voting mechanism to hypothesize multiple pose candidates. The final transformation is then verified using our designed occupancy-aware scoring method. To assess the effectiveness of our approach, we conducted real-world multi-session experiments in a large-scale university building, employing two different types of LiDAR sensors. We make the collected datasets and code publicly available to benefit the community.
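The back-end idea of mapping matched triplets into a discretized transformation space and voting lends itself to a compact illustration. The Python sketch below is not the authors' implementation: it assumes a planar SE(2) pose (x, y, yaw), which often suffices at building scale, uses a single-resolution voting grid in place of the paper's hierarchical scheme, and omits the occupancy-aware verification. The function names, bin resolutions, and the `matched_triplets` input format are all hypothetical.

```python
import numpy as np
from collections import Counter

def estimate_se2(src, dst):
    """Closed-form SE(2) alignment of two corresponding corner triplets
    (planar Kabsch/Horn). src, dst: (3, 2) arrays of 2D corner points."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    # Optimal rotation angle from the 2x2 cross-covariance matrix.
    h = src_c.T @ dst_c
    theta = np.arctan2(h[0, 1] - h[1, 0], h[0, 0] + h[1, 1])
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    t = dst.mean(0) - R @ src.mean(0)
    return theta, t

def hough_vote(matched_triplets, xy_res=1.0, yaw_res=np.deg2rad(5)):
    """Map each matched triplet to a pose hypothesis, vote in a
    discretized (x, y, yaw) grid, and return the consensus pose.
    matched_triplets: iterable of (src, dst) triplet pairs (hypothetical format)."""
    votes = Counter()
    bins = {}
    for src, dst in matched_triplets:
        theta, t = estimate_se2(src, dst)
        key = (round(t[0] / xy_res), round(t[1] / xy_res),
               round(theta / yaw_res))
        votes[key] += 1
        bins.setdefault(key, []).append((theta, t))
    best_key, _ = votes.most_common(1)[0]
    # Average the hypotheses in the winning bin for a refined estimate
    # (naive angle averaging; fine away from the +/-pi wraparound).
    thetas, ts = zip(*bins[best_key])
    return np.mean(thetas), np.mean(ts, axis=0)
```

The appeal of voting over, say, least-squares on all matches is robustness: incorrect triplet matches scatter their pose hypotheses across the grid, while correct ones concentrate in a single bin, so the peak survives a high outlier ratio.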

@article{qiao2025_2405.03969,
  title={Speak the Same Language: Global LiDAR Registration on BIM Using Pose Hough Transform},
  author={Zhijian Qiao and Haoming Huang and Chuhao Liu and Zehuan Yu and Shaojie Shen and Fumin Zhang and Huan Yin},
  journal={arXiv preprint arXiv:2405.03969},
  year={2025}
}