CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos

Navigating dynamic urban environments presents significant challenges for embodied agents, requiring advanced spatial reasoning and adherence to common-sense norms. Despite progress, existing visual navigation methods struggle in map-free or off-street settings, limiting the deployment of autonomous agents such as last-mile delivery robots. To overcome these obstacles, we propose a scalable, data-driven approach to human-like urban navigation by training agents on thousands of hours of in-the-wild city walking and driving videos sourced from the web. We introduce a simple and scalable data processing pipeline that extracts action supervision from these videos, enabling large-scale imitation learning without costly annotations. Our model learns sophisticated navigation policies that handle diverse challenges and critical scenarios. Experimental results show that training on large-scale, diverse datasets significantly enhances navigation performance, surpassing current methods. This work demonstrates the potential of abundant online video data for developing robust navigation policies for embodied agents in dynamic urban settings. Project homepage is at this https URL.
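The abstract does not detail the pipeline's internals, but a common way to extract action supervision from raw egocentric video is to estimate the camera's relative motion between consecutive frames (e.g., via visual odometry) and treat that motion as a pseudo-action label for imitation learning. The sketch below illustrates this idea under stated assumptions: the function name relative_pose_actions and the [forward displacement, heading change] action parameterization are illustrative choices, not necessarily the paper's actual pipeline.

```python
import numpy as np

def relative_pose_actions(positions, yaws):
    """Turn a visual-odometry trajectory into per-step pseudo-actions.

    positions: (T, 2) planar camera positions recovered from video.
    yaws: (T,) camera headings in radians.
    Returns a (T-1, 2) array of [forward displacement, heading change],
    i.e., the "action" that moved the camera from frame t to frame t+1.
    """
    actions = []
    for t in range(len(positions) - 1):
        dx, dy = positions[t + 1] - positions[t]
        # Rotate the world-frame displacement into the camera frame at
        # time t (x-axis assumed to point forward).
        c, s = np.cos(-yaws[t]), np.sin(-yaws[t])
        forward = c * dx - s * dy
        # Wrap the heading change to (-pi, pi].
        dyaw = np.arctan2(np.sin(yaws[t + 1] - yaws[t]),
                          np.cos(yaws[t + 1] - yaws[t]))
        actions.append([forward, dyaw])
    return np.asarray(actions)

# Toy example: a camera walking forward while turning gently left.
T = 5
yaws = np.linspace(0.0, 0.4, T)
positions = np.cumsum(np.stack([np.cos(yaws), np.sin(yaws)], axis=1), axis=0)
print(relative_pose_actions(positions, yaws))
```

Labels of this form require no human annotation, which is what makes web-scale city walking and driving footage usable as imitation-learning data.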
@article{liu2025_2411.17820,
  title={CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos},
  author={Xinhao Liu and Jintong Li and Yicheng Jiang and Niranjan Sujay and Zhicheng Yang and Juexiao Zhang and John Abanes and Jing Zhang and Chen Feng},
  journal={arXiv preprint arXiv:2411.17820},
  year={2025}
}