FrontierNet: Learning Visual Cues to Explore

Exploration of unknown environments is crucial for autonomous robots: it allows them to actively reason about and decide what new data to acquire for tasks such as mapping, object discovery, and environmental assessment. Existing solutions, such as frontier-based exploration approaches, rely heavily on 3D map operations, which are limited by map quality and, more critically, often overlook valuable context from visual cues. This work leverages 2D visual cues for efficient autonomous exploration, addressing the limitations of extracting goal poses from a 3D map. We propose a visual-only frontier-based exploration system, with FrontierNet as its core component. FrontierNet is a learning-based model that (i) proposes frontiers and (ii) predicts their information gain from posed RGB images enhanced by monocular depth priors. Our approach provides an alternative to existing 3D-dependent goal-extraction approaches, achieving a 15% improvement in early-stage exploration efficiency, as validated through extensive simulations and real-world experiments. The project is available at this https URL.
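To make the two-head idea concrete, below is a minimal PyTorch sketch of a model that consumes an RGB image concatenated with a monocular depth prior and emits (i) a per-pixel frontier proposal map and (ii) a per-pixel information-gain estimate. The class name FrontierNetSketch, the layer sizes, and the dense per-pixel gain output are all assumptions for illustration; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class FrontierNetSketch(nn.Module):
    """Hypothetical two-head network: frontier proposal + information gain."""

    def __init__(self, in_channels: int = 4, base: int = 32):
        super().__init__()
        # Shared encoder over RGB (3 channels) concatenated with a
        # monocular depth prior (1 channel), as suggested by the abstract.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Head (i): per-pixel frontier logits (where frontiers are).
        self.frontier_head = nn.Conv2d(base * 2, 1, 1)
        # Head (ii): per-pixel information-gain estimate (how much
        # exploring each frontier is expected to be worth).
        self.gain_head = nn.Conv2d(base * 2, 1, 1)

    def forward(self, rgb: torch.Tensor, depth_prior: torch.Tensor):
        x = torch.cat([rgb, depth_prior], dim=1)
        feats = self.encoder(x)
        frontier_logits = self.frontier_head(feats)
        info_gain = self.gain_head(feats).relu()  # gain is non-negative
        return frontier_logits, info_gain

# Usage on dummy data:
model = FrontierNetSketch()
rgb = torch.rand(1, 3, 240, 320)    # posed RGB image
depth = torch.rand(1, 1, 240, 320)  # monocular depth prior
frontiers, gain = model(rgb, depth)
print(frontiers.shape, gain.shape)  # torch.Size([1, 1, 60, 80]) each
```

In a frontier-based pipeline, the predicted gain map would then rank candidate frontiers so the planner can visit the most informative ones first, which is consistent with the early-stage efficiency gains the abstract reports.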
@article{sun2025_2501.04597,
  title   = {FrontierNet: Learning Visual Cues to Explore},
  author  = {Boyang Sun and Hanzhi Chen and Stefan Leutenegger and Cesar Cadena and Marc Pollefeys and Hermann Blum},
  journal = {arXiv preprint arXiv:2501.04597},
  year    = {2025}
}