
Imitation Learning from a Single Temporally Misaligned Video

Abstract

We examine the problem of learning sequential tasks from a single visual demonstration. A key challenge arises when demonstrations are temporally misaligned due to variations in timing, differences in embodiment, or inconsistencies in execution. Existing approaches treat imitation as a distribution-matching problem, aligning individual frames between the agent and the demonstration. However, we show that such frame-level matching fails to enforce temporal ordering or ensure consistent progress. Our key insight is that matching should instead be defined at the level of sequences. We propose that perfect matching occurs when one sequence successfully covers all the subgoals in the same order as the other sequence. We present ORCA (ORdered Coverage Alignment), a dense per-timestep reward function that measures the probability of the agent covering demonstration frames in the correct order. On temporally misaligned demonstrations, we show that agents trained with the ORCA reward achieve a 4.5x improvement ($0.11 \rightarrow 0.50$ average normalized returns) for Meta-world tasks and a 6.6x improvement ($6.55 \rightarrow 43.3$ average returns) for Humanoid-v4 tasks compared to the best frame-level matching algorithms. We also provide empirical analysis showing that ORCA is robust to varying levels of temporal misalignment. Our code is available at this https URL.
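To make the idea of ordered coverage concrete, the sketch below shows one way a dense, order-aware reward could be computed as a small dynamic program over frame-matching probabilities. This is a hypothetical reconstruction based only on the abstract, not the authors' released ORCA implementation: the function name `orca_style_reward`, the softmax matching over embedding distances, and the monotone coverage recursion are all illustrative assumptions.

```python
import numpy as np

def orca_style_reward(agent_embs, demo_embs, temperature=1.0):
    """Illustrative ordered-coverage reward (not the paper's exact formulation).

    agent_embs: (T, d) embeddings of the agent's frames up to the current step
    demo_embs:  (N, d) embeddings of the demonstration frames (the subgoals)

    Returns a scalar reward reflecting the probability that the agent's
    trajectory so far has covered the demonstration frames in order.
    """
    T, N = len(agent_embs), len(demo_embs)

    # Per-pair matching probabilities: softmax over demo frames of
    # negative squared embedding distances.
    dists = ((agent_embs[:, None, :] - demo_embs[None, :, :]) ** 2).sum(-1)
    match_prob = np.exp(-dists / temperature)
    match_prob /= match_prob.sum(axis=1, keepdims=True)

    # cover[t, n]: probability that agent frames 0..t have covered demo
    # frames 0..n in order, i.e. only "stay on subgoal n" or "advance
    # from n-1 to n" transitions are allowed (no skipping, no going back).
    cover = np.zeros((T, N))
    cover[0, 0] = match_prob[0, 0]
    for t in range(1, T):
        for n in range(N):
            stay = cover[t - 1, n]
            advance = cover[t - 1, n - 1] if n > 0 else 0.0
            cover[t, n] = (stay + advance) * match_prob[t, n]

    # Dense per-timestep reward: expected ordered progress at the latest frame.
    return float((cover[-1] * np.arange(1, N + 1)).sum() / N)
```

The recursion resembles a probabilistic dynamic-time-warping pass restricted to monotone moves, which is precisely the ordering constraint that frame-level distribution matching does not enforce: an agent that visits the right frames in the wrong order accumulates no coverage probability here.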

@article{huey2025_2502.05397,
  title={Imitation Learning from a Single Temporally Misaligned Video},
  author={William Huey and Huaxiaoyue Wang and Anne Wu and Yoav Artzi and Sanjiban Choudhury},
  journal={arXiv preprint arXiv:2502.05397},
  year={2025}
}