Learning Visual-Audio Representations for Voice-Controlled Robots
- SSL
Inspired by sensorimotor theory, we propose a novel pipeline for task-oriented voice-controlled robots. Previous methods rely on large amounts of labels as well as task-specific reward functions. Such an approach can hardly be improved after deployment and generalizes poorly across robotic platforms and tasks. To address these problems, we learn a visual-audio representation (VAR) that associates images and sound commands with minimal supervision. Using this representation, we generate an intrinsic reward function for learning robot policies with reinforcement learning, which eliminates the laborious reward-engineering process. We demonstrate our approach on various robotic platforms, where the robots hear an audio command, identify the associated target object, and perform precise control to fulfill it. We show that our method outperforms previous work across various sound types and robotic tasks even with fewer labels. We successfully deploy a policy learned in simulation to a real Kinova Gen3 arm. We also demonstrate that the VAR and the intrinsic reward function allow the robot to improve itself using only a small amount of labeled data collected in the real world.
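The intrinsic-reward idea can be sketched as follows: assuming the VAR provides an image encoder and an audio encoder mapping into a shared embedding space, the per-step reward can be the similarity between the current observation's embedding and the command's embedding. The linear "encoders" and names below are hypothetical placeholders for illustration, not the paper's actual networks.

```python
import math
import random

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def linear_encoder(weights):
    # Placeholder for a learned encoder: a fixed linear map
    # from raw features to the shared embedding space.
    def encode(features):
        return [sum(w * x for w, x in zip(row, features)) for row in weights]
    return encode

class IntrinsicReward:
    """Reward = similarity of image and audio embeddings in the shared
    VAR space: higher when the observed scene matches the commanded
    target. The encoders here are hypothetical stand-ins."""
    def __init__(self, image_encoder, audio_encoder):
        self.image_encoder = image_encoder
        self.audio_encoder = audio_encoder

    def __call__(self, image, audio_command):
        z_img = self.image_encoder(image)
        z_aud = self.audio_encoder(audio_command)
        return cosine_similarity(z_img, z_aud)

# Toy usage with random linear "encoders" and random features.
rng = random.Random(0)
W_img = [[rng.gauss(0, 1) for _ in range(32)] for _ in range(8)]
W_aud = [[rng.gauss(0, 1) for _ in range(16)] for _ in range(8)]
reward_fn = IntrinsicReward(linear_encoder(W_img), linear_encoder(W_aud))

r = reward_fn([rng.gauss(0, 1) for _ in range(32)],
              [rng.gauss(0, 1) for _ in range(16)])
assert -1.0 <= r <= 1.0  # cosine similarity is bounded
```

Because the reward comes from the learned representation rather than a hand-written task function, the same RL loop can be reused across tasks and platforms.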