Robust Offline Imitation Learning Through State-level Trajectory Stitching

Abstract

Imitation learning (IL) has proven effective for enabling robots to acquire visuomotor skills from expert demonstrations. However, traditional IL methods are limited by their reliance on high-quality, often scarce, expert data, and they suffer from covariate shift. To address these challenges, recent advances in offline IL incorporate suboptimal, unlabeled datasets into training. In this paper, we propose a novel approach that enhances policy learning from mixed-quality offline datasets by leveraging task-relevant trajectory fragments and rich environmental dynamics. Specifically, we introduce a state-based search framework that stitches state-action pairs from imperfect demonstrations, generating more diverse and informative training trajectories. Experimental results on standard IL benchmarks and real-world robotic tasks show that the proposed method significantly improves both generalization and performance.
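To make the stitching idea concrete, the following is a minimal sketch of state-level trajectory stitching via nearest-neighbor state matching. It is an illustration of the general technique only, not the paper's actual algorithm: the function name, the greedy search, and the distance threshold `eps` are all assumptions introduced here.

```python
# Illustrative sketch: stitch a new trajectory by hopping between
# demonstrations whenever another demonstration visits a nearby state.
# All names and the threshold `eps` are illustrative assumptions.
import numpy as np

def stitch_trajectories(trajectories, start, max_len=5, eps=0.5):
    """Greedily build a trajectory from fragments of multiple demonstrations.

    trajectories: list of trajectories, each a list of (state, action) pairs
                  with states as 1-D arrays.
    start: initial state (1-D array).
    Returns a list of (state, action) pairs, possibly drawn from
    different source trajectories.
    """
    # Flatten the dataset into a pool of (state, action, next_state) tuples.
    pool = []
    for traj in trajectories:
        for t in range(len(traj) - 1):
            s, a = traj[t]
            s_next, _ = traj[t + 1]
            pool.append((np.asarray(s, float), a, np.asarray(s_next, float)))

    stitched, state = [], np.asarray(start, float)
    for _ in range(max_len):
        # Nearest stored state across the whole pool, regardless of which
        # demonstration it came from.
        dists = [np.linalg.norm(state - s) for s, _, _ in pool]
        i = int(np.argmin(dists))
        if dists[i] > eps:   # no sufficiently similar state: stop stitching
            break
        s, a, s_next = pool[i]
        stitched.append((s, a))
        state = s_next       # continue from where that fragment leads
    return stitched
```

Because matching happens at the state level, the stitched trajectory can cross from one demonstration into another wherever their visited states nearly coincide, which is what yields training trajectories more diverse than any single imperfect demonstration.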

@article{wang2025_2503.22524,
  title={Robust Offline Imitation Learning Through State-level Trajectory Stitching},
  author={Shuze Wang and Yunpeng Mei and Hongjie Cao and Yetian Yuan and Gang Wang and Jian Sun and Jie Chen},
  journal={arXiv preprint arXiv:2503.22524},
  year={2025}
}