
SPIN: distilling Skill-RRT for long-horizon prehensile and non-prehensile manipulation

Main: 8 pages · 10 figures · Bibliography: 6 pages · 35 tables · Appendix: 26 pages
Abstract

Current robots struggle with long-horizon manipulation tasks that require sequences of prehensile and non-prehensile skills, contact-rich interactions, and long-term reasoning. We present SPIN (Skill Planning to INference), a framework that distills a computationally intensive planning algorithm into a policy via imitation learning. We propose Skill-RRT, an extension of RRT that incorporates skill applicability checks and intermediate object pose sampling to solve such long-horizon problems. To chain independently trained skills, we introduce connectors, goal-conditioned policies trained to minimize object disturbance during transitions. High-quality demonstrations are generated with Skill-RRT and distilled through noise-based replay to reduce online computation time. The resulting policy, trained entirely in simulation, transfers zero-shot to the real world, achieves over 80% success across three challenging long-horizon manipulation tasks, and outperforms state-of-the-art hierarchical RL and planning methods.
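To make the planner concrete, here is a minimal sketch of a Skill-RRT-style loop in the spirit the abstract describes: each extension samples an intermediate object pose, finds the nearest tree node, runs a skill applicability check, and only then extends the tree with that skill. Everything here is an illustrative assumption (a toy 1-D object pose space, the `Skill` and `Node` classes, the reach and applicability parameters), not the paper's implementation.

```python
# Hedged sketch of a Skill-RRT-style planner on a toy 1-D object pose space.
# All names, skill parameters, and the domain itself are illustrative
# assumptions; the paper's actual algorithm operates on full object poses.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Skill:
    name: str
    lo: float    # pose region where the skill's applicability check passes
    hi: float
    step: float  # how far one skill execution can move the object

    def applicable(self, pose: float) -> bool:
        return self.lo <= pose <= self.hi

    def execute(self, pose: float, target: float) -> float:
        # Move the object toward the target pose, bounded by the skill's reach.
        delta = max(-self.step, min(self.step, target - pose))
        return pose + delta

@dataclass
class Node:
    pose: float
    parent: Optional["Node"] = None
    skill: Optional[str] = None

def skill_rrt(start, goal, skills, iters=2000, tol=0.05, seed=0):
    rng = random.Random(seed)
    tree = [Node(start)]
    for _ in range(iters):
        # Intermediate object pose sampling (with a simple goal bias).
        target = goal if rng.random() < 0.2 else rng.uniform(0.0, 10.0)
        near = min(tree, key=lambda n: abs(n.pose - target))
        # Skill applicability check before extending the tree.
        options = [s for s in skills if s.applicable(near.pose)]
        if not options:
            continue
        skill = rng.choice(options)
        new = Node(skill.execute(near.pose, target), near, skill.name)
        tree.append(new)
        if abs(new.pose - goal) <= tol:
            # Walk back up the tree to recover the skill sequence.
            path, node = [], new
            while node is not None:
                path.append((node.pose, node.skill))
                node = node.parent
            return path[::-1]
    return None  # no skill chain found within the iteration budget

# Two toy skills: a short-reach non-prehensile push and a longer-reach
# prehensile pick-and-place, with overlapping applicability regions so
# the planner can chain them.
skills = [Skill("push", 0.0, 5.0, 1.0),
          Skill("pick_place", 4.0, 10.0, 3.0)]
path = skill_rrt(start=0.5, goal=9.0, skills=skills)
```

In this sketch the chaining problem the connectors address shows up as the overlap between the two skills' applicability regions; the real connectors are learned goal-conditioned policies that carry the object between skills with minimal disturbance.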
