GoalGrasp: Grasping Goals in Partially Occluded Scenarios without Grasp Training

Grasping user-specified objects is crucial for robotic assistants; however, most current 6-DoF grasp detection methods are object-agnostic, making it difficult to grasp a specific target within a scene. To this end, we present GoalGrasp, a simple yet effective 6-DoF robot grasp pose detection method that requires neither grasp pose annotations nor grasp training. By combining 3D bounding boxes with simple human grasp priors, our method introduces a novel paradigm for robot grasp pose detection. GoalGrasp's key novelty lies in rapidly grasping user-specified objects while partially mitigating occlusion. The experimental evaluation involves 18 common objects categorized into 7 classes, for which our method generates dense grasp poses across 1000 scenes. Under a novel stability metric, our grasp poses are significantly more stable than those of existing approaches. In user-specified robot grasping tests, our method achieves a 94% success rate, which drops only to 92% under partial occlusion.
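To make the box-plus-prior idea concrete, the sketch below shows one plausible way to derive 6-DoF grasp candidates from a target's 3D bounding box: close the gripper across the box's shortest horizontal axis and approach from above. This is a hedged illustration of the general paradigm, not GoalGrasp's actual algorithm; all names (`grasp_candidates_from_box`, `box_center`, `box_axes`, `box_extents`) and the specific heuristic are assumptions.

```python
# Hypothetical sketch: generate top-down 6-DoF grasp candidates from a
# target's oriented 3D bounding box. The heuristic (close across the
# shortest axis, approach from above) is an assumed human-like grasp
# prior, not the paper's published method.
import numpy as np

def grasp_candidates_from_box(box_center, box_axes, box_extents, n_depths=5):
    """Return a list of 4x4 grasp poses for an oriented 3D bounding box.

    box_center:  (3,) box centroid in the world frame.
    box_axes:    (3, 3) unit box axes, one per column.
    box_extents: (3,) full side lengths along the corresponding axes.
    """
    box_center = np.asarray(box_center, dtype=float)
    box_axes = np.asarray(box_axes, dtype=float)
    box_extents = np.asarray(box_extents, dtype=float)

    # Close the gripper along the box's shortest axis: thin objects are
    # easiest to enclose across their narrow dimension.
    close_dir = box_axes[:, int(np.argmin(box_extents))]

    # Approach from above (world -z), a simple human-like grasp prior.
    approach = np.array([0.0, 0.0, -1.0])
    # Project the closing direction onto the horizontal plane and renormalize.
    close_dir = close_dir - close_dir.dot(approach) * approach
    close_dir /= np.linalg.norm(close_dir)
    lateral = np.cross(approach, close_dir)

    # Box height = extent along the axis most aligned with the approach.
    height = box_extents[int(np.argmax(np.abs(box_axes.T @ approach)))]

    poses = []
    for t in np.linspace(0.2, 0.8, n_depths):  # sample grasp depths in the box
        pose = np.eye(4)
        pose[:3, 0] = close_dir   # gripper x: closing direction
        pose[:3, 1] = lateral     # gripper y: lateral direction
        pose[:3, 2] = approach    # gripper z: approach direction
        pose[:3, 3] = box_center - approach * (0.5 - t) * height
        poses.append(pose)
    return poses
```

Sampling several depths along the approach axis yields a dense candidate set per object, which a downstream stability score could then rank; the projection step simply keeps the closing direction horizontal for a top-down grasp.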
@article{gui2025_2405.04783,
  title={GoalGrasp: Grasping Goals in Partially Occluded Scenarios without Grasp Training},
  author={Shun Gui and Kai Gui and Yan Luximon},
  journal={arXiv preprint arXiv:2405.04783},
  year={2025}
}