P3Nav: A Unified Framework for Embodied Navigation Integrating Perception, Planning, and Prediction

In language-guided visual navigation, agents locate target objects in unseen environments using natural language instructions. For reliable navigation in unfamiliar scenes, agents must possess strong perception, planning, and prediction capabilities. Additionally, when agents revisit previously explored areas during long-term navigation, they may retain irrelevant and redundant historical perceptions, leading to suboptimal results. In this work, we introduce \textbf{P3Nav}, a unified framework that integrates \textbf{P}erception, \textbf{P}lanning, and \textbf{P}rediction capabilities through \textbf{Multitask Collaboration} on navigation and embodied question answering (EQA) tasks, thereby enhancing navigation performance. Furthermore, P3Nav employs an \textbf{Adaptive 3D-aware History Sampling} strategy to utilize historical observations both effectively and efficiently. By leveraging large language models (LLMs), P3Nav comprehends diverse commands and complex visual scenes, producing appropriate navigation actions. P3Nav achieves a 75\% success rate in object goal navigation on the - benchmark, setting a new state of the art.
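The abstract does not detail how the Adaptive 3D-aware History Sampling strategy works; the sketch below is only a minimal illustration of the underlying idea it motivates (avoiding redundant historical perceptions when revisiting explored areas), not the paper's actual algorithm. The function name, voxel size, and feature shapes are all assumptions made for this example.

    import numpy as np

    def sample_history(positions, features, voxel_size=0.5):
        """Keep at most one historical observation per 3D voxel.

        positions: (N, 3) array of agent positions for past observations.
        features:  (N, D) array of the corresponding observation features.
        Returns the indices of the retained observations, preferring the most
        recent one in each voxel so revisited areas do not flood the history.
        (Illustrative only; not the method described in the paper.)
        """
        voxels = np.floor(np.asarray(positions) / voxel_size).astype(int)
        kept = {}                 # voxel coordinate -> index of latest visit
        for idx, v in enumerate(map(tuple, voxels)):
            kept[v] = idx         # later observations overwrite earlier ones
        retained = sorted(kept.values())
        return retained, features[retained]

    # Toy usage: four observations, two pairs falling in the same voxel.
    pos = np.array([[0.1, 0.0, 0.2], [0.2, 0.1, 0.3],
                    [3.0, 0.0, 1.0], [3.1, 0.2, 1.1]])
    feat = np.random.randn(4, 8)
    idx, kept_feat = sample_history(pos, feat)
    print(idx)  # [1, 3] -- one observation kept per visited region

In this toy version, "adaptive" reduces to keeping the most recent observation per spatial cell; the paper's strategy is presumably richer, but the example shows why 3D-aware pruning of history avoids the redundancy described above.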
@article{zhong2025_2503.18525,
  title   = {P3Nav: A Unified Framework for Embodied Navigation Integrating Perception, Planning, and Prediction},
  author  = {Yufeng Zhong and Chengjian Feng and Feng Yan and Fanfan Liu and Liming Zheng and Lin Ma},
  journal = {arXiv preprint arXiv:2503.18525},
  year    = {2025}
}