The Effect of Viewpoint on Grasp Detection

Abstract

Many robots today are capable of controlling the pose of their visual sensors. A natural problem these systems face is how to select a sensor pose that enables them to perform their task reliably. In this work we study sensor pose selection where the robot's task is to detect or recognize grasps in visual data. This is an important problem because state-of-the-art robotic grasping systems are still not fully reliable when object instances are unknown a priori and objects are presented in clutter, and a large share of the failures observed in these systems are perceptual errors. In this paper we show that viewpoint does indeed affect grasp detection performance in terms of the number of grasps found, accuracy, and detector confidence. Further, we propose a simple method for learning viewpoints that finds 4.4 times more correct grasps than a random-viewpoint baseline. Finally, we evaluate these ideas in a cluttered, table-top scenario with a diverse set of objects using a Baxter manipulator robot. The robot's grasp success rate increases by 3-5% when it relies on our viewpoint selection method.
