Visuomotor imitation learning enables embodied agents to effectively acquire manipulation skills from video demonstrations and robot proprioception. However, as scene complexity and visual distractions increase, methods that perform well in simple scenes tend to degrade. To address this challenge, we introduce Imit Diff, a semantics-guided diffusion transformer with dual resolution fusion for imitation learning. Our approach leverages prior knowledge from vision-language foundation models to translate high-level semantic instructions into pixel-level visual localization. This information is explicitly integrated into a multi-scale visual enhancement framework built on a dual resolution encoder. Additionally, we introduce an implementation of Consistency Policy within the diffusion transformer architecture to improve both real-time performance and motion smoothness in embodied agents. We evaluate Imit Diff on several challenging real-world tasks. Owing to its task-oriented visual localization and fine-grained scene perception, it significantly outperforms state-of-the-art methods, especially in complex scenes with visual distractions, including zero-shot experiments on visual distraction and category generalization. The code will be made publicly available.
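To make the dual resolution fusion idea concrete, below is a minimal sketch of how a VLM-derived semantic mask could gate a high-resolution visual stream before it is fused with a low-resolution global view. All module names, input sizes, and the gating strategy are assumptions for illustration, not the authors' released implementation.

```python
# Illustrative sketch only: layer names, shapes, and the fusion strategy are
# hypothetical, chosen to mirror the abstract's description of semantics-guided
# dual resolution fusion; they are not the Imit Diff codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualResolutionFusion(nn.Module):
    """Fuse a low-resolution global view with a high-resolution view,
    gated by a pixel-level semantic mask (e.g. from a VLM prior)."""

    def __init__(self, channels: int = 256):
        super().__init__()
        # Separate convolutional stems for the two resolutions (hypothetical sizes).
        self.low_stem = nn.Conv2d(3, channels, kernel_size=4, stride=4)
        self.high_stem = nn.Conv2d(3, channels, kernel_size=8, stride=8)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, low_img, high_img, sem_mask):
        # low_img:  (B, 3, 128, 128) coarse global view
        # high_img: (B, 3, 256, 256) fine-grained view
        # sem_mask: (B, 1, H, W) pixel-level localization from the VLM prior
        low_feat = self.low_stem(low_img)
        high_feat = self.high_stem(high_img)
        # Resize the semantic mask to the feature resolution and gate the
        # high-resolution branch so task-relevant regions are emphasized.
        mask = F.interpolate(sem_mask, size=high_feat.shape[-2:], mode="bilinear")
        high_feat = high_feat * (1.0 + mask)
        # Bring both streams to a common spatial size and fuse them.
        low_feat = F.interpolate(low_feat, size=high_feat.shape[-2:], mode="bilinear")
        return self.fuse(torch.cat([low_feat, high_feat], dim=1))


if __name__ == "__main__":
    fusion = DualResolutionFusion()
    feats = fusion(
        torch.randn(1, 3, 128, 128),
        torch.randn(1, 3, 256, 256),
        torch.rand(1, 1, 256, 256),
    )
    print(feats.shape)  # fused visual features for a downstream policy head
```

In such a design, the fused feature map would then be tokenized and consumed by the diffusion transformer policy; the mask gating is one simple way to realize the "explicit integration" of semantic localization the abstract describes.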
@article{dong2025_2502.09649,
  title={Imit Diff: Semantics Guided Diffusion Transformer with Dual Resolution Fusion for Imitation Learning},
  author={Yuhang Dong and Haizhou Ge and Yupei Zeng and Jiangning Zhang and Beiwen Tian and Guanzhong Tian and Hongrui Zhu and Yufei Jia and Ruixiang Wang and Ran Yi and Guyue Zhou and Longhua Ma},
  journal={arXiv preprint arXiv:2502.09649},
  year={2025}
}