Toward Aligning Human and Robot Actions via Multi-Modal Demonstration Learning

Abstract

Understanding action correspondence between humans and robots is essential for evaluating alignment in decision-making, particularly in human-robot collaboration and imitation learning within unstructured environments. We propose a multimodal demonstration learning framework that explicitly pairs human demonstrations captured as RGB video with robot demonstrations represented in voxelized RGB-D space. Focusing on the "pick and place" task from the RH20T dataset, we use data from 5 users across 10 diverse scenes. Our approach combines a ResNet-based visual encoder for human intention modeling with a Perceiver Transformer for voxel-based robot action prediction. After 2000 training epochs, the human model reaches 71.67% accuracy and the robot model achieves 71.8% accuracy, demonstrating the framework's potential for aligning complex, multimodal human and robot behaviors in manipulation tasks.
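The sketch below illustrates the two branches named in the abstract: a ResNet backbone classifying human intention from RGB frames, and a Perceiver-style model in which a small latent array cross-attends to voxel tokens before predicting a robot action. This is a minimal, hypothetical reconstruction; layer sizes, the number of action classes, the voxel feature dimension, and all module names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the two branches described in the abstract.
# All hyperparameters (latent size, action count, voxel feature dim) are assumed.
import torch
import torch.nn as nn
import torchvision.models as models


class HumanIntentionModel(nn.Module):
    """ResNet backbone followed by a linear head over assumed action classes."""

    def __init__(self, num_actions: int = 10):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights optional
        backbone.fc = nn.Identity()               # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, num_actions)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) human demonstration frame
        return self.head(self.backbone(rgb))


class VoxelPerceiver(nn.Module):
    """Perceiver-style model: learned latents cross-attend to voxel tokens."""

    def __init__(self, voxel_dim: int = 10, latent_dim: int = 256,
                 num_latents: int = 64, num_actions: int = 10):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, latent_dim))
        self.input_proj = nn.Linear(voxel_dim, latent_dim)
        self.cross_attn = nn.MultiheadAttention(latent_dim, num_heads=4,
                                                batch_first=True)
        self.self_attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(latent_dim, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(latent_dim, num_actions)

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (B, N, voxel_dim) flattened voxel features
        # (e.g. RGB, occupancy, and coordinates per occupied voxel)
        B = voxels.shape[0]
        tokens = self.input_proj(voxels)
        latents = self.latents.unsqueeze(0).expand(B, -1, -1)
        latents, _ = self.cross_attn(latents, tokens, tokens)  # latents query voxels
        latents = self.self_attn(latents)                      # refine latents
        return self.head(latents.mean(dim=1))                  # pooled action logits


if __name__ == "__main__":
    human_logits = HumanIntentionModel()(torch.randn(2, 3, 224, 224))
    robot_logits = VoxelPerceiver()(torch.randn(2, 1000, 10))
    print(human_logits.shape, robot_logits.shape)  # torch.Size([2, 10]) twice
```

Both branches end in logits over the same assumed action vocabulary, which is one plausible way the two models' predictions could be compared for human-robot alignment.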

@article{zahid2025_2504.11493,
  title={Toward Aligning Human and Robot Actions via Multi-Modal Demonstration Learning},
  author={Azizul Zahid and Jie Fan and Farong Wang and Ashton Dy and Sai Swaminathan and Fei Liu},
  journal={arXiv preprint arXiv:2504.11493},
  year={2025}
}