Pixel Motion as Universal Representation for Robot Control

We present LangToMo, a vision-language-action framework structured as a dual-system architecture that uses pixel motion forecasts as intermediate representations. Our high-level System 2, an image diffusion model, generates text-conditioned pixel motion sequences from a single frame to guide robot control. Pixel motion, a universal, interpretable, and motion-centric representation, can be extracted from videos in a self-supervised manner, enabling diffusion model training on web-scale video-caption data. Treating generated pixel motion as learned universal representations, our low-level System 1 module translates these into robot actions via motion-to-action mapping functions, which can be either hand-crafted or learned with minimal supervision. System 2 operates as a high-level policy applied at sparse temporal intervals, while System 1 acts as a low-level policy at dense temporal intervals. This hierarchical decoupling enables flexible, scalable, and generalizable robot control under both unsupervised and supervised settings, bridging the gap between language, motion, and action. See this https URL for visualizations.
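
To make the dual-system control loop concrete, the following is a minimal sketch, not the authors' implementation: System 2 is queried at sparse intervals to forecast a pixel motion sequence from the current frame and language instruction, and System 1 converts that cached forecast into dense per-step actions. All names here (PixelMotionDiffusion, MotionToActionMapper, the env API, and the replan_every parameter) are illustrative assumptions, not part of the released code.

```python
# Hedged sketch of the hierarchical System 2 / System 1 loop described above.
# Class names, the environment interface, and the replanning interval are
# hypothetical placeholders chosen for illustration.

import numpy as np


class PixelMotionDiffusion:
    """Stand-in for System 2: a text-conditioned image diffusion model that
    forecasts a sequence of pixel motion fields from a single RGB frame."""

    def predict_motion(self, frame: np.ndarray, instruction: str) -> np.ndarray:
        # Expected to return a (K, H, W, 2) array of K future pixel motion fields.
        raise NotImplementedError


class MotionToActionMapper:
    """Stand-in for System 1: maps a pixel motion field to a low-level robot
    action; could be hand-crafted or learned with minimal supervision."""

    def __call__(self, motion_field: np.ndarray, observation: np.ndarray) -> np.ndarray:
        raise NotImplementedError


def run_episode(env, system2: PixelMotionDiffusion, system1: MotionToActionMapper,
                instruction: str, horizon: int = 200, replan_every: int = 20):
    """System 2 acts as a sparse high-level policy (every `replan_every` steps);
    System 1 acts as a dense low-level policy at every control step."""
    obs = env.reset()  # assumed to return the current RGB observation
    motion_plan = None
    for t in range(horizon):
        if t % replan_every == 0:
            # Sparse call: forecast pixel motion from the current frame + text.
            motion_plan = system2.predict_motion(obs, instruction)
        # Index into the cached forecast proportionally to elapsed steps.
        k = (t % replan_every) * len(motion_plan) // replan_every
        action = system1(motion_plan[k], obs)  # dense motion-to-action mapping
        obs, done = env.step(action)           # assumed (observation, done) return
        if done:
            break
```

The split between a sparse generative planner and a dense reactive mapper is the point of the sketch: the expensive diffusion call amortizes over many cheap action steps, which is what allows the high-level model to be trained on web-scale video while the low-level mapper stays lightweight.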
@article{ranasinghe2025_2505.07817,
  title   = {Pixel Motion as Universal Representation for Robot Control},
  author  = {Kanchana Ranasinghe and Xiang Li and Cristina Mata and Jongwoo Park and Michael S. Ryoo},
  journal = {arXiv preprint arXiv:2505.07817},
  year    = {2025}
}