Virtual Guidance as a Mid-level Representation for Navigation with Augmented Reality

In the context of autonomous navigation, effectively conveying abstract navigational cues to agents in dynamic environments presents significant challenges, particularly when the navigation information is derived from diverse modalities such as vision and high-level language descriptions. To address this issue, we introduce a novel technique termed `Virtual Guidance,' which is designed to visually represent non-visual instructional signals. These visual cues are overlaid onto the agent's camera view and serve as comprehensible navigational guidance signals. To validate the concept of virtual guidance, we propose a sim-to-real framework that enables the transfer of a trained policy from simulated environments to the real world, ensuring the adaptability of virtual guidance in practical scenarios. We evaluate and compare the proposed method against a non-visual guidance baseline through detailed experiments in simulation. The experimental results demonstrate that the proposed virtual guidance approach outperforms the baseline across multiple scenarios and offers clear evidence of its effectiveness in autonomous navigation tasks.
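To illustrate the core idea of rendering a non-visual navigational signal as a visual overlay, the following is a minimal sketch rather than the authors' implementation: it assumes a guidance waypoint has already been projected to a pixel location (`target_px`) and simply alpha-blends a marker into the camera frame. The function name, marker shape, and blend weight are illustrative assumptions.

```python
import numpy as np
import cv2


def overlay_virtual_guidance(frame: np.ndarray,
                             target_px: tuple,
                             alpha: float = 0.5) -> np.ndarray:
    """Blend a translucent guidance marker into an RGB camera frame."""
    overlay = frame.copy()
    # Draw a filled circle at the projected waypoint location.
    cv2.circle(overlay, target_px, radius=20, color=(0, 255, 0), thickness=-1)
    # Alpha-blend the marker so the underlying scene remains visible.
    return cv2.addWeighted(overlay, alpha, frame, 1.0 - alpha, 0.0)


# Example usage on a dummy 480x640 frame:
frame = np.zeros((480, 640, 3), dtype=np.uint8)
augmented = overlay_virtual_guidance(frame, target_px=(320, 400))
```

The augmented frame can then be fed to the agent's policy in place of the raw observation, so the guidance signal is consumed through the same visual channel as the scene itself.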
@article{yang2025_2303.02731,
  title   = {Virtual Guidance as a Mid-level Representation for Navigation with Augmented Reality},
  author  = {Hsuan-Kung Yang and Tsung-Chih Chiang and Jou-Min Liu and Ting-Ru Liu and Chun-Wei Huang and Tsu-Ching Hsiao and Chun-Yi Lee},
  journal = {arXiv preprint arXiv:2303.02731},
  year    = {2025}
}