Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation

17 February 2025
Shichao Fan, Quantao Yang, Yajie Liu, Kun Wu, Zhengping Che, Qingjie Liu, Min Wan
Abstract

Recently, Vision-Language-Action (VLA) models have advanced robot imitation learning, but high data collection costs and limited demonstrations hinder generalization, and current imitation learning methods struggle in out-of-distribution scenarios, especially for long-horizon tasks. A key challenge is how to mitigate compounding errors in imitation learning, which lead to cascading failures over extended trajectories. To address these challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework, which generates 2D trajectories through a diffusion model to guide policy learning for long-horizon tasks. By leveraging task-relevant trajectories, DTP provides trajectory-level guidance to reduce error accumulation. Our two-stage approach first trains a generative vision-language model to create diffusion-based trajectories, then refines the imitation policy using them. Experiments on the CALVIN benchmark show that DTP outperforms state-of-the-art baselines by 25% in success rate, starting from scratch without external pretraining. Moreover, DTP significantly improves real-world robot performance.
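
The abstract describes a two-stage pipeline: a diffusion model first generates task-relevant 2D trajectories from visual and language context, and an imitation policy is then trained conditioned on those trajectories. Below is a minimal PyTorch sketch of that idea only; it is not the authors' implementation, and the module structure, dimensions, simplified noising step, and placeholder data are all illustrative assumptions.

# Minimal sketch of the two-stage idea described in the abstract (not the authors' code).
# Module names, shapes, and the simplified noising scheme below are illustrative assumptions.
import torch
import torch.nn as nn

class TrajectoryDiffuser(nn.Module):
    """Stage 1 (assumed form): predicts noise on a 2D trajectory, conditioned on
    vision-language features and a diffusion timestep."""
    def __init__(self, traj_len=16, cond_dim=64, hidden=128):
        super().__init__()
        self.traj_len = traj_len
        self.net = nn.Sequential(
            nn.Linear(traj_len * 2 + cond_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, traj_len * 2),
        )

    def forward(self, noisy_traj, cond, t):
        # noisy_traj: (B, traj_len, 2); cond: (B, cond_dim); t: (B, 1)
        x = torch.cat([noisy_traj.flatten(1), cond, t], dim=-1)
        return self.net(x).view(-1, self.traj_len, 2)  # predicted noise

class TrajectoryGuidedPolicy(nn.Module):
    """Stage 2 (assumed form): maps observation features plus a generated 2D
    trajectory to a robot action, so the trajectory guides the policy."""
    def __init__(self, obs_dim=64, traj_len=16, act_dim=7, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + traj_len * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, traj):
        return self.net(torch.cat([obs, traj.flatten(1)], dim=-1))

if __name__ == "__main__":
    B, T = 4, 16
    diffuser = TrajectoryDiffuser(traj_len=T)
    policy = TrajectoryGuidedPolicy(traj_len=T)

    # Stage 1: denoising objective on demonstrated 2D trajectories (placeholder tensors).
    cond = torch.randn(B, 64)            # stands in for vision-language features
    clean_traj = torch.randn(B, T, 2)    # stands in for demonstrated 2D trajectories
    t = torch.rand(B, 1)
    noise = torch.randn_like(clean_traj)
    noisy = clean_traj + t.view(B, 1, 1) * noise   # simplified noising, not a real DDPM schedule
    stage1_loss = nn.functional.mse_loss(diffuser(noisy, cond, t), noise)

    # Stage 2: behavior cloning conditioned on a (crudely) denoised trajectory guess.
    obs = torch.randn(B, 64)
    with torch.no_grad():
        traj_guess = noisy - diffuser(noisy, cond, t) * t.view(B, 1, 1)
    expert_action = torch.randn(B, 7)
    stage2_loss = nn.functional.mse_loss(policy(obs, traj_guess), expert_action)
    print(stage1_loss.item(), stage2_loss.item())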

@article{fan2025_2502.10040,
  title={Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation},
  author={Shichao Fan and Quantao Yang and Yajie Liu and Kun Wu and Zhengping Che and Qingjie Liu and Min Wan},
  journal={arXiv preprint arXiv:2502.10040},
  year={2025}
}
Main: 7 pages, 7 figures, 3 tables; Bibliography: 1 page