AO-Grasp: Articulated Object Grasp Generation

24 October 2023
Carlota Parés-Morlans, Claire Chen, Yijia Weng, Michelle Yi, Yuying Huang, Nick Heppert, Linqi Zhou, Leonidas J. Guibas, Jeannette Bohg
Abstract

We introduce AO-Grasp, a grasp proposal method that generates 6 DoF grasps that enable robots to interact with articulated objects, such as opening and closing cabinets and appliances. AO-Grasp consists of two main contributions: the AO-Grasp Model and the AO-Grasp Dataset. Given a segmented partial point cloud of a single articulated object, the AO-Grasp Model predicts the best grasp points on the object with an Actionable Grasp Point Predictor. Then, it finds corresponding grasp orientations for each of these points, resulting in stable and actionable grasp proposals. We train the AO-Grasp Model on our new AO-Grasp Dataset, which contains 78K actionable parallel-jaw grasps on synthetic articulated objects. In simulation, AO-Grasp achieves a 45.0% grasp success rate, whereas the highest-performing baseline achieves a 35.0% success rate. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects with varied geometries, articulation axes, and joint states, where AO-Grasp produces successful grasps on 67.5% of scenes, while the baseline only produces successful grasps on 33.3% of scenes. To the best of our knowledge, AO-Grasp is the first method for generating 6 DoF grasps on articulated objects directly from partial point clouds without requiring part detection or hand-designed grasp heuristics. Project website: this https URL
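To make the two-stage structure described above concrete, the following is a minimal, hypothetical Python sketch of such a pipeline: a per-point scorer standing in for the Actionable Grasp Point Predictor, followed by a simple local-geometry heuristic that attaches a 6 DoF orientation to each top-scoring point. All function names and heuristics here are illustrative placeholders under assumed interfaces, not the authors' actual model or code.

# Sketch of a two-stage grasp-proposal pipeline in the spirit of the abstract:
# (1) score per-point "actionable grasp" likelihood on a partial point cloud,
# (2) attach a 6 DoF orientation to each top-scoring point.
# All names and heuristics are hypothetical placeholders, not AO-Grasp itself.
import numpy as np

def score_grasp_points(points: np.ndarray) -> np.ndarray:
    """Stand-in for the Actionable Grasp Point Predictor: returns a per-point
    score in [0, 1]. A dummy heuristic (distance from the centroid) is used
    purely to keep the sketch runnable."""
    centroid = points.mean(axis=0)
    dist = np.linalg.norm(points - centroid, axis=1)
    return dist / (dist.max() + 1e-8)

def estimate_orientation(points: np.ndarray, idx: int, k: int = 16) -> np.ndarray:
    """Stand-in for the orientation stage: builds a rotation matrix whose
    approach axis is the local surface normal, estimated by PCA over the
    k nearest neighbors of the selected point."""
    dists = np.linalg.norm(points - points[idx], axis=1)
    patch = points[np.argsort(dists)[:k]]
    patch = patch - patch.mean(axis=0)
    # The smallest principal component approximates the surface normal.
    _, _, vt = np.linalg.svd(patch, full_matrices=False)
    normal = vt[-1]
    # Complete an orthonormal, right-handed frame around the approach axis.
    tangent = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(tangent) < 1e-6:
        tangent = np.cross(normal, [0.0, 1.0, 0.0])
    tangent /= np.linalg.norm(tangent)
    binormal = np.cross(normal, tangent)
    return np.stack([tangent, binormal, normal], axis=1)  # 3x3 rotation

def propose_grasps(points: np.ndarray, top_k: int = 5):
    """Full pipeline: score all points, keep the top-k, attach orientations."""
    scores = score_grasp_points(points)
    best = np.argsort(scores)[-top_k:]
    return [(points[i], estimate_orientation(points, i), scores[i]) for i in best]

if __name__ == "__main__":
    cloud = np.random.rand(2048, 3)  # placeholder for a segmented partial point cloud
    for position, rotation, score in propose_grasps(cloud):
        print(score, position, rotation.shape)

In the actual method, both stages are learned from the AO-Grasp Dataset; the heuristics above only illustrate the input/output structure (segmented partial point cloud in, scored 6 DoF grasp proposals out).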

@article{morlans2025_2310.15928,
  title={AO-Grasp: Articulated Object Grasp Generation},
  author={Carlota Parés Morlans and Claire Chen and Yijia Weng and Michelle Yi and Yuying Huang and Nick Heppert and Linqi Zhou and Leonidas Guibas and Jeannette Bohg},
  journal={arXiv preprint arXiv:2310.15928},
  year={2025}
}