Sequential View Grasp Detection For Inexpensive Robotic Arms

Abstract

In this paper, we consider the idea of improving the performance of grasp detection by viewing an object to be grasped from a series of different perspectives. Grasp detection is an approach to perception for grasping whereby robotic grasp configurations are detected directly from point cloud or RGB sensor data. This paper focuses on the situation where the camera or depth sensor is mounted near the robotic hand. In this context, there are at least two ways in which viewpoint can affect grasp performance. First, a "good" viewpoint might enable the robot to detect more and better grasps because it has a better view of graspable parts of an object. Second, by detecting grasps from arm configurations near the final grasp configuration, it might be possible to reduce the impact of kinematic modelling errors on the last stage of grasp synthesis just prior to contact. Both of these effects are particularly relevant to inexpensive robotic arms. We evaluate them experimentally both in simulation and online with a robot. We find that both of the effects mentioned above exist, but that the second one (reducing kinematic modelling errors) has the greater impact in practice.
