NeuroLoc: Encoding Navigation Cells for 6-DOF Camera Localization

2 May 2025
Xun Li
Jian Yang
Fenli Jia
Muyu Wang
Qi Wu
Jun Wu
Jinpeng Mi
Jilin Hu
Peidong Liang
Xuan Tang
Ke Li
Xiong You
Xian Wei
Abstract

Camera localization has recently been widely adopted in autonomous robotic navigation due to its efficiency and convenience. However, in unknown environments, camera localization often suffers from scene ambiguity, environmental disturbances, and changes caused by dynamic objects. To address these problems, and inspired by the navigation mechanisms of the biological brain (grid cells, place cells, and head direction cells), we propose a novel neurobiologically inspired camera localization method, NeuroLoc. First, we design a Hebbian learning module driven by place cells to store and replay historical information, restoring the details of historical representations and mitigating scene ambiguity. Second, we use head-direction-cell-inspired internal direction learning as a multi-head attention embedding to help recover the true orientation in visually similar scenes. Finally, we add a 3D grid center prediction to the pose regression module to reduce erroneous final predictions. We evaluate NeuroLoc on commonly used indoor and outdoor benchmark datasets. The experimental results show that NeuroLoc enhances robustness in complex environments and improves pose regression performance using only a single image.
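
The abstract names three components: a place-cell-driven Hebbian memory that stores and replays historical representations, a head-direction-style multi-head attention embedding, and a pose regressor anchored to a predicted 3D grid center. The sketch below is a hypothetical illustration of how such pieces could fit together in PyTorch; the class names (NeuroLocSketch, HebbianMemory, GridAnchoredPoseHead), network sizes, the outer-product Hebbian update, and the placeholder grid centers are assumptions for illustration, not the authors' implementation.

# Minimal, hypothetical sketch of the three ideas named in the abstract.
# All module names, shapes, and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HebbianMemory(nn.Module):
    """Associative memory updated with a Hebbian outer-product rule (assumed form)."""

    def __init__(self, dim: int, lr: float = 0.1):
        super().__init__()
        self.register_buffer("W", torch.zeros(dim, dim))
        self.lr = lr

    @torch.no_grad()
    def write(self, x: torch.Tensor) -> None:
        # Hebbian update: strengthen co-active feature pairs, W <- W + lr * x^T x
        self.W += self.lr * (x.T @ x) / x.shape[0]

    def replay(self, x: torch.Tensor) -> torch.Tensor:
        # Recall stored context and fuse it with the current features.
        return x + x @ self.W


class GridAnchoredPoseHead(nn.Module):
    """Pose head that predicts a coarse 3D grid center and regresses an offset from it."""

    def __init__(self, dim: int, num_cells: int, cell_centers: torch.Tensor):
        super().__init__()
        self.cell_logits = nn.Linear(dim, num_cells)   # which grid cell
        self.offset = nn.Linear(dim, 3)                # offset from the cell center
        self.rot = nn.Linear(dim, 4)                   # quaternion orientation
        self.register_buffer("centers", cell_centers)  # (num_cells, 3)

    def forward(self, f: torch.Tensor):
        probs = F.softmax(self.cell_logits(f), dim=-1)
        center = probs @ self.centers                  # soft grid-center estimate
        t = center + self.offset(f)                    # translation anchored to the grid
        q = F.normalize(self.rot(f), dim=-1)           # unit quaternion
        return t, q


class NeuroLocSketch(nn.Module):
    def __init__(self, dim: int = 256, num_cells: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.ReLU())
        self.memory = HebbianMemory(dim)
        # Head-direction-style embedding realized here as multi-head self-attention.
        self.direction_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        centers = torch.rand(num_cells, 3) * 10.0      # placeholder grid centers
        self.pose_head = GridAnchoredPoseHead(dim, num_cells, centers)

    def forward(self, image: torch.Tensor):
        f = self.backbone(image)                        # (B, dim) image features
        f = self.memory.replay(f)                        # restore historical detail
        a, _ = self.direction_attn(f.unsqueeze(1), f.unsqueeze(1), f.unsqueeze(1))
        f = f + a.squeeze(1)                             # orientation-aware features
        self.memory.write(f.detach())                    # store the current representation
        return self.pose_head(f)                         # (translation, quaternion)


if __name__ == "__main__":
    model = NeuroLocSketch()
    t, q = model(torch.randn(2, 3, 32, 32))
    print(t.shape, q.shape)                              # torch.Size([2, 3]) torch.Size([2, 4])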

@article{li2025_2505.01113,
  title={NeuroLoc: Encoding Navigation Cells for 6-DOF Camera Localization},
  author={Xun Li and Jian Yang and Fenli Jia and Muyu Wang and Qi Wu and Jun Wu and Jinpeng Mi and Jilin Hu and Peidong Liang and Xuan Tang and Ke Li and Xiong You and Xian Wei},
  journal={arXiv preprint arXiv:2505.01113},
  year={2025}
}