GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation

27 November 2024
Wenbo Cui
Chengyang Zhao
Songlin Wei
Jiazhao Zhang
Haoran Geng
Yaran Chen
Haoran Li
He Wang
Abstract

Effectively manipulating articulated objects in household scenarios is a crucial step toward achieving general embodied artificial intelligence. Mainstream research in 3D vision has primarily focused on manipulation through depth perception and pose detection. However, in real-world environments these methods often face challenges due to imperfect depth perception, such as with transparent lids and reflective handles. Moreover, they generally lack the diversity of part-based interactions required for flexible and adaptable manipulation. To address these challenges, we introduce a large-scale part-centric dataset for articulated object manipulation that features both photo-realistic material randomization and detailed annotations of part-oriented, scene-level actionable interaction poses. We evaluate the effectiveness of our dataset by integrating it with several state-of-the-art methods for depth estimation and interaction pose prediction. Additionally, we propose a novel modular framework that delivers superior and robust performance for generalizable articulated object manipulation. Our extensive experiments demonstrate that the dataset significantly improves depth perception and actionable interaction pose prediction in both simulation and real-world scenarios. More information and demos can be found at: this https URL.
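
As a rough illustration of the modular design the abstract outlines (a depth-perception stage followed by part-oriented interaction pose prediction), the following minimal Python sketch chains a depth-restoration step with an actionable-pose proposal step. All names here (DepthEstimator, PosePredictor, InteractionPose, plan_interaction) are hypothetical placeholders for illustration only, not the authors' released code or API.

# Hypothetical sketch of a two-stage modular pipeline: restore imperfect
# sensor depth, then propose part-oriented interaction poses. Placeholder
# models stand in for the learned components described in the abstract.
from dataclasses import dataclass

import numpy as np


@dataclass
class InteractionPose:
    """A candidate end-effector pose for interacting with an object part."""
    position: np.ndarray   # (3,) translation in the camera frame
    rotation: np.ndarray   # (3, 3) rotation matrix
    score: float           # predicted actionability score


class DepthEstimator:
    """Stand-in for a learned depth-restoration model that corrects imperfect
    sensor depth (e.g. on transparent lids or reflective handles)."""

    def predict(self, rgb: np.ndarray, raw_depth: np.ndarray) -> np.ndarray:
        # Dummy behaviour: fill invalid (zero) depth with the median valid depth.
        valid = raw_depth > 0
        filled = raw_depth.copy()
        filled[~valid] = np.median(raw_depth[valid]) if valid.any() else 0.0
        return filled


class PosePredictor:
    """Stand-in for a part-centric model that proposes actionable poses."""

    def predict(self, rgb: np.ndarray, depth: np.ndarray) -> list[InteractionPose]:
        # Dummy behaviour: return a single pose at the image-centre depth.
        h, w = depth.shape
        z = float(depth[h // 2, w // 2])
        return [InteractionPose(position=np.array([0.0, 0.0, z]),
                                rotation=np.eye(3),
                                score=1.0)]


def plan_interaction(rgb, raw_depth, depth_model, pose_model) -> InteractionPose:
    """Chain the two modules and return the highest-scoring candidate pose."""
    refined_depth = depth_model.predict(rgb, raw_depth)
    candidates = pose_model.predict(rgb, refined_depth)
    return max(candidates, key=lambda p: p.score)


if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)
    raw_depth = np.random.default_rng(0).uniform(0.3, 1.5, size=(480, 640))
    raw_depth[100:200, 100:200] = 0.0  # simulate missing depth on a glossy part
    best = plan_interaction(rgb, raw_depth, DepthEstimator(), PosePredictor())
    print("best pose score:", best.score)

The only point of the sketch is the composition: restored depth feeds the pose predictor, and the highest-scoring candidate pose is selected for execution.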

View on arXiv
@article{cui2025_2411.18276,
  title={GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation},
  author={Wenbo Cui and Chengyang Zhao and Songlin Wei and Jiazhao Zhang and Haoran Geng and Yaran Chen and Haoran Li and He Wang},
  journal={arXiv preprint arXiv:2411.18276},
  year={2025}
}