Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration

Abstract

Teaching robots dexterous manipulation skills often requires collecting hundreds of demonstrations using wearables or teleoperation, a process that is challenging to scale. Videos of human-object interactions are easier to collect and scale, but leveraging them directly for robot learning is difficult due to the lack of explicit action labels from videos and morphological differences between robot and human hands. We propose Human2Sim2Robot, a novel real-to-sim-to-real framework for training dexterous manipulation policies using only one RGB-D video of a human demonstrating a task. Our method utilizes reinforcement learning (RL) in simulation to cross the human-robot embodiment gap without relying on wearables, teleoperation, or large-scale data collection typically necessary for imitation learning methods. From the demonstration, we extract two task-specific components: (1) the object pose trajectory to define an object-centric, embodiment-agnostic reward function, and (2) the pre-manipulation hand pose to initialize and guide exploration during RL training. We find that these two components are highly effective for learning the desired task, eliminating the need for task-specific reward shaping and tuning. We demonstrate that Human2Sim2Robot outperforms object-aware open-loop trajectory replay by 55% and imitation learning with data augmentation by 68% across grasping, non-prehensile manipulation, and multi-step tasks. Project Site: this https URL
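
The abstract names two task-specific components extracted from the single human video: an object-centric reward defined by the demonstrated object pose trajectory, and a pre-manipulation hand pose used to initialize RL episodes. The following is a minimal sketch of how such components might look; the function names, pose conventions, weights, and the env.set_hand_pose call are illustrative assumptions, not the paper's actual implementation.

# Hypothetical sketch of the two components described in the abstract:
# an object-centric reward that tracks the demonstrated object pose trajectory,
# and an episode reset that starts the robot hand at the extracted
# pre-manipulation pose. Names and conventions are illustrative only.
import numpy as np


def quat_angle_error(q1: np.ndarray, q2: np.ndarray) -> float:
    """Smallest rotation angle (radians) between two unit quaternions (w, x, y, z)."""
    dot = np.clip(np.abs(np.dot(q1, q2)), -1.0, 1.0)
    return 2.0 * np.arccos(dot)


def object_tracking_reward(
    obj_pos: np.ndarray,    # current object position, shape (3,)
    obj_quat: np.ndarray,   # current object orientation, unit quaternion, shape (4,)
    demo_pos: np.ndarray,   # demonstrated object position at this step, shape (3,)
    demo_quat: np.ndarray,  # demonstrated object orientation at this step, shape (4,)
    pos_weight: float = 1.0,
    rot_weight: float = 0.1,
) -> float:
    """Embodiment-agnostic reward: penalize deviation of the object pose from the
    pose extracted from the human video at the corresponding trajectory step.
    No robot- or hand-specific terms appear here."""
    pos_err = np.linalg.norm(obj_pos - demo_pos)
    rot_err = quat_angle_error(obj_quat, demo_quat)
    return -(pos_weight * pos_err + rot_weight * rot_err)


def reset_to_pre_manipulation(env, hand_pose: np.ndarray, noise_scale: float = 0.01):
    """Initialize each RL episode near the pre-manipulation hand pose extracted
    from the demonstration, with small noise to aid exploration.
    `env.set_hand_pose` is a placeholder for the simulator's reset API."""
    noisy_pose = hand_pose + noise_scale * np.random.randn(*hand_pose.shape)
    env.set_hand_pose(noisy_pose)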

@article{lum2025_2504.12609,
  title={Crossing the Human-Robot Embodiment Gap with Sim-to-Real RL using One Human Demonstration},
  author={Tyler Ga Wei Lum and Olivia Y. Lee and C. Karen Liu and Jeannette Bohg},
  journal={arXiv preprint arXiv:2504.12609},
  year={2025}
}