Category Level Pick and Place Using Deep Reinforcement Learning

Abstract

This paper proposes a new formulation of robotic pick and place. We formulate pick and place as a deep RL problem in which the actions are grasp and place poses for the robot's hand, and the state is encoded by the observed geometry local to a selected grasp. This framework is well-suited to learning pick and place tasks involving novel objects in clutter. We present experiments demonstrating that our method performs well on a new variant of pick and place tasks which we call category level pick and place, where the category of the object to be manipulated is known but its exact appearance and geometry are unknown. The results show that, even though the objects are novel and presented in clutter, our method can still grasp, re-grasp, and place them in a desired pose with high probability.
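To make the formulation concrete, here is a minimal sketch of the two ingredients the abstract names: an action that is a grasp or place pose for the hand, and a state built from the geometry local to a selected grasp. All names, the crop radius, and the statistics-based encoding are illustrative assumptions, not the paper's actual (learned) representation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PoseAction:
    """Action = a 6-DOF hand pose, used either to grasp or to place.
    (Hypothetical structure; the paper does not specify this layout.)"""
    position: np.ndarray    # shape (3,): hand position
    quaternion: np.ndarray  # shape (4,): hand orientation
    is_grasp: bool          # True = grasp pose, False = place pose

def encode_state(point_cloud: np.ndarray, grasp_center: np.ndarray,
                 radius: float = 0.1) -> np.ndarray:
    """Encode the observed geometry local to a selected grasp:
    crop points within `radius` of the grasp center and summarize
    them with simple statistics (a stand-in for a learned encoder)."""
    dists = np.linalg.norm(point_cloud - grasp_center, axis=1)
    local = point_cloud[dists < radius]
    if local.size == 0:
        return np.zeros(6)  # no geometry near the grasp
    # Mean and per-axis spread of the local crop as a tiny feature vector.
    return np.concatenate([local.mean(axis=0), local.std(axis=0)])
```

An RL policy over this interface would map the encoded local state to the next `PoseAction`, alternating grasps and placements until the object reaches the desired pose.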
