We propose a hierarchical reinforcement learning (HRL) framework for efficient Navigation Among Movable Obstacles (NAMO) using a mobile manipulator. Our approach combines interaction-based obstacle property estimation with structured pushing strategies, enabling dynamic manipulation of unforeseen obstacles while adhering to a pre-planned global path. The high-level policy generates pushing commands that account for environmental constraints and path-tracking objectives, while the low-level policy executes these commands precisely and stably through coordinated whole-body movements. Comprehensive simulation experiments demonstrate improvements over baselines on NAMO tasks: higher success rates, shorter traversed paths, and reduced goal-reaching times. Ablation studies assess the efficacy of each component, and a qualitative analysis further validates the accuracy and reliability of the real-time obstacle property estimation.
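As a rough illustration of the two-level structure described above, the sketch below shows a hypothetical control loop in which a high-level policy proposes a pushing command for an obstacle blocking the planned path, and a low-level policy converts it into a base velocity command. All names, dynamics, and the perpendicular-push heuristic are illustrative assumptions, not the paper's actual learned policies.

```python
import math
from dataclasses import dataclass

@dataclass
class PushCommand:
    contact_point: tuple  # (x, y) location on the obstacle to push
    direction: float      # desired push direction (rad), illustrative only

def high_level_policy(robot_xy, obstacle_xy, path_goal_xy):
    """Hypothetical high-level policy: push the obstacle sideways off the path."""
    gx = path_goal_xy[0] - robot_xy[0]
    gy = path_goal_xy[1] - robot_xy[1]
    heading = math.atan2(gy, gx)
    # Assumed heuristic: push perpendicular to the path direction.
    return PushCommand(contact_point=obstacle_xy, direction=heading + math.pi / 2)

def low_level_policy(robot_xy, cmd, gain=0.5):
    """Hypothetical low-level policy: proportional velocity toward the contact point."""
    dx = cmd.contact_point[0] - robot_xy[0]
    dy = cmd.contact_point[1] - robot_xy[1]
    return (gain * dx, gain * dy)

def step(robot_xy, vel, dt=0.1):
    """Integrate the base position one timestep (point-robot assumption)."""
    return (robot_xy[0] + vel[0] * dt, robot_xy[1] + vel[1] * dt)

if __name__ == "__main__":
    robot, obstacle, goal = (0.0, 0.0), (1.0, 0.0), (3.0, 0.0)
    for _ in range(100):
        cmd = high_level_policy(robot, obstacle, goal)
        robot = step(robot, low_level_policy(robot, cmd))
    # The base converges toward the commanded contact point on the obstacle.
    print(round(robot[0], 2), round(robot[1], 2))  # → 0.99 0.0
```

In the actual framework both levels are learned, and the low-level policy coordinates the full arm-base (whole-body) motion rather than a point base; this sketch only conveys the command-and-execute decomposition.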
@article{yang2025_2506.15380,
  title={Efficient Navigation Among Movable Obstacles using a Mobile Manipulator via Hierarchical Policy Learning},
  author={Taegeun Yang and Jiwoo Hwang and Jeil Jeong and Minsung Yoon and Sung-Eui Yoon},
  journal={arXiv preprint arXiv:2506.15380},
  year={2025}
}