Slot-Level Robotic Placement via Visual Imitation from Single Human Video

The majority of modern robot learning methods focus on a set of pre-defined tasks with limited or no generalization to new tasks. Extending a robot's skillset to novel tasks therefore requires gathering extensive additional training data. In this paper, we address the problem of teaching robots new repetitive tasks (e.g., packing) from human demonstration videos. This requires understanding the human video to identify which object is being manipulated (the pick object) and where it is being placed (the placement slot). In addition, the system must re-identify the pick object and the placement slots during inference, along with their relative poses, to enable robot execution of the task. To tackle this, we propose SLeRP, a modular system that leverages several advanced visual foundation models and a novel slot-level placement detector, Slot-Net, eliminating the need for expensive video demonstrations during training. We evaluate our system on a new benchmark of real-world videos. The evaluation results show that SLeRP outperforms several baselines and can be deployed on a real robot.
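The abstract describes a modular pipeline: parse the human video to find the pick object and placement slot, then re-identify both in the robot's scene at inference time. Below is a minimal, hypothetical sketch of that flow; every function and class name here is an illustrative assumption, not the authors' actual API, and the real system would use visual foundation models and Slot-Net rather than string matching.

```python
# Hypothetical sketch of a SLeRP-style two-stage pipeline (names are assumptions).
from dataclasses import dataclass

@dataclass
class Placement:
    pick_object: str  # object manipulated in the human video
    slot: str         # placement slot it is moved into

def parse_demo(video_frames):
    """Stage 1 stand-in: extract (pick object, slot) pairs from the demo video.

    A real system would run detectors / foundation models on raw frames;
    here each frame is a dict that may carry pre-computed annotations.
    """
    return [Placement(f["pick"], f["slot"]) for f in video_frames if "pick" in f]

def reidentify(placement, scene_objects):
    """Stage 2 stand-in: re-find the pick object and slot in the robot's scene."""
    pick = next(o for o in scene_objects if o == placement.pick_object)
    slot = next(o for o in scene_objects if o == placement.slot)
    return pick, slot

# Toy usage: one annotated frame, one unannotated frame.
demo = [{"pick": "mug", "slot": "box_slot_2"}, {"frame": 1}]
plans = parse_demo(demo)
pick, slot = reidentify(plans[0], ["table", "mug", "box_slot_2"])
```

In the real system, the re-identification stage would also estimate the relative pose between the pick object and the slot so the robot can execute the placement.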
@article{shan2025_2504.01959,
  title={Slot-Level Robotic Placement via Visual Imitation from Single Human Video},
  author={Dandan Shan and Kaichun Mo and Wei Yang and Yu-Wei Chao and David Fouhey and Dieter Fox and Arsalan Mousavian},
  journal={arXiv preprint arXiv:2504.01959},
  year={2025}
}