
DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning

Abstract

The ability to predict future outcomes given control actions is fundamental to physical reasoning. However, such predictive models, often called world models, remain challenging to learn and are typically developed as task-specific solutions with online policy learning. To unlock world models' true potential, we argue that they should 1) be trainable on offline, pre-collected trajectories, 2) support test-time behavior optimization, and 3) facilitate task-agnostic reasoning. To this end, we present DINO World Model (DINO-WM), a new method that models visual dynamics without reconstructing the visual world. DINO-WM leverages spatial patch features pre-trained with DINOv2, enabling it to learn from offline behavioral trajectories by predicting future patch features. This allows DINO-WM to achieve observational goals through action sequence optimization, facilitating task-agnostic planning by treating goal features as prediction targets. We demonstrate that DINO-WM achieves zero-shot behavioral solutions at test time on six environments without expert demonstrations, reward modeling, or pre-learned inverse models, outperforming prior state-of-the-art work across diverse task families such as arbitrarily configured mazes, push manipulation with varied object shapes, and multi-particle scenarios.
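To make the core idea concrete, below is a minimal sketch of the approach the abstract describes: a latent dynamics model that predicts future patch features given actions, and zero-shot planning by optimizing an action sequence so the predicted features match a goal image's features. This is not the authors' implementation; the class and function names (`LatentDynamics`, `plan_actions`), the hyperparameters, and the choice of a cross-entropy-method (CEM) optimizer are illustrative assumptions, and a frozen random tensor stands in for actual DINOv2 patch features.

```python
# Illustrative sketch of the DINO-WM idea (not the paper's code).
# Random tensors stand in for pre-trained DINOv2 patch features.
import torch
import torch.nn as nn

N_PATCHES, FEAT_DIM, ACT_DIM, HORIZON = 16, 64, 2, 5

class LatentDynamics(nn.Module):
    """Predicts next-step patch features from current features and an action."""
    def __init__(self):
        super().__init__()
        self.act_proj = nn.Linear(ACT_DIM, FEAT_DIM)
        layer = nn.TransformerEncoderLayer(d_model=FEAT_DIM, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, feats, action):
        # feats: (B, N_PATCHES, FEAT_DIM); action: (B, ACT_DIM)
        # Append the projected action as an extra token, then predict patch features.
        tokens = torch.cat([feats, self.act_proj(action).unsqueeze(1)], dim=1)
        return self.backbone(tokens)[:, :N_PATCHES]

@torch.no_grad()
def plan_actions(model, cur_feats, goal_feats, n_samples=256, n_iters=3):
    """Zero-shot planning: CEM over action sequences, scoring rollouts by
    distance between predicted and goal patch features (no reward model)."""
    mean = torch.zeros(HORIZON, ACT_DIM)
    std = torch.ones(HORIZON, ACT_DIM)
    for _ in range(n_iters):
        actions = mean + std * torch.randn(n_samples, HORIZON, ACT_DIM)
        feats = cur_feats.expand(n_samples, -1, -1)
        for t in range(HORIZON):  # roll the world model forward in feature space
            feats = model(feats, actions[:, t])
        cost = ((feats - goal_feats) ** 2).mean(dim=(1, 2))
        elites = actions[cost.topk(32, largest=False).indices]
        mean, std = elites.mean(0), elites.std(0) + 1e-6
    return mean  # planned action sequence

if __name__ == "__main__":
    model = LatentDynamics()
    cur = torch.randn(1, N_PATCHES, FEAT_DIM)   # stand-in for current-frame DINOv2 features
    goal = torch.randn(1, N_PATCHES, FEAT_DIM)  # stand-in for goal-image features
    print(plan_actions(model, cur, goal).shape)  # torch.Size([5, 2])
```

Because the cost is just a distance between predicted and goal features, any observational goal defines a planning problem, which is what makes the approach task-agnostic: no per-task reward model or inverse model is needed.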

@article{zhou2025_2411.04983,
  title={DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning},
  author={Gaoyue Zhou and Hengkai Pan and Yann LeCun and Lerrel Pinto},
  journal={arXiv preprint arXiv:2411.04983},
  year={2025}
}