
ZeroVO: Visual Odometry with Minimal Assumptions

Main: 8 pages · Appendix: 12 pages · Bibliography: 4 pages · 3 figures · 16 tables
Abstract

We introduce ZeroVO, a novel visual odometry (VO) algorithm that achieves zero-shot generalization across diverse cameras and environments, overcoming limitations in existing methods that depend on predefined or static camera calibration setups. Our approach incorporates three main innovations. First, we design a calibration-free, geometry-aware network structure capable of handling noise in estimated depth and camera parameters. Second, we introduce a language-based prior that infuses semantic information to enhance robust feature extraction and generalization to previously unseen domains. Third, we develop a flexible, semi-supervised training paradigm that iteratively adapts to new scenes using unlabeled data, further boosting the model's ability to generalize across diverse real-world scenarios. We analyze complex autonomous driving contexts, demonstrating over 30% improvement over prior methods on three standard benchmarks (KITTI, nuScenes, and Argoverse 2), as well as on a newly introduced, high-fidelity synthetic dataset derived from Grand Theft Auto (GTA). By not requiring fine-tuning or camera calibration, our work broadens the applicability of VO, providing a versatile solution for real-world deployment at scale.
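The abstract describes a calibration-free network that fuses geometry-aware features (computed from noisy estimated depth and intrinsics) with a language-based semantic prior before regressing relative pose. As a rough illustration only, and not the authors' published architecture, the following PyTorch sketch shows one hypothetical way such a fusion-and-regression head could be wired; all module names, feature dimensions, and the 6-DoF axis-angle pose parameterization are assumptions.

import torch
import torch.nn as nn

class ZeroVOSketch(nn.Module):
    """Hypothetical sketch of a calibration-free VO pose head.

    Illustrates the abstract's idea only: project geometry features
    (e.g., derived from estimated depth/flow) and a frozen language
    embedding into a shared space, concatenate, and regress relative
    pose. Sizes and structure are illustrative assumptions.
    """

    def __init__(self, geo_dim=256, lang_dim=512, hidden=256):
        super().__init__()
        # Projects per-frame-pair geometry features; the geometry
        # encoder itself is abstracted away in this sketch.
        self.geo_proj = nn.Linear(geo_dim, hidden)
        # Projects a language/vision-language prior embedding.
        self.lang_proj = nn.Linear(lang_dim, hidden)
        # Joint regressor: 3-DoF translation + 3-DoF rotation (axis-angle).
        self.pose_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 6),
        )

    def forward(self, geo_feat, lang_feat):
        fused = torch.cat(
            [self.geo_proj(geo_feat), self.lang_proj(lang_feat)], dim=-1
        )
        pose = self.pose_head(fused)
        return pose[..., :3], pose[..., 3:]  # translation, rotation

# Usage with random stand-in features for a batch of 4 frame pairs.
model = ZeroVOSketch()
t, r = model(torch.randn(4, 256), torch.randn(4, 512))
print(t.shape, r.shape)  # torch.Size([4, 3]) torch.Size([4, 3])

The key design point mirrored here is that no camera calibration enters the forward pass: geometry features are computed from estimates rather than known intrinsics, and the semantic prior provides a domain-agnostic signal intended to aid zero-shot transfer.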

@article{lai2025_2506.08005,
  title={ZeroVO: Visual Odometry with Minimal Assumptions},
  author={Lei Lai and Zekai Yin and Eshed Ohn-Bar},
  journal={arXiv preprint arXiv:2506.08005},
  year={2025}
}