Effectively manipulating articulated objects in household scenarios is a crucial step toward achieving general embodied artificial intelligence. Mainstream research in 3D vision has primarily focused on manipulation through depth perception and pose detection. However, in real-world environments, these methods often struggle with imperfect depth perception, such as on transparent lids and reflective handles. Moreover, they generally lack the diversity of part-based interactions required for flexible and adaptable manipulation. To address these challenges, we introduce a large-scale part-centric dataset for articulated object manipulation that features both photo-realistic material randomization and detailed annotations of part-oriented, scene-level actionable interaction poses. We evaluate the effectiveness of our dataset by integrating it with several state-of-the-art methods for depth estimation and interaction pose prediction. Additionally, we propose a novel modular framework that delivers superior and robust performance for generalizable articulated object manipulation. Our extensive experiments demonstrate that our dataset significantly improves the performance of depth perception and actionable interaction pose prediction in both simulation and real-world scenarios. More information and demos can be found at: this https URL.
@article{cui2025_2411.18276,
  title   = {GAPartManip: A Large-scale Part-centric Dataset for Material-Agnostic Articulated Object Manipulation},
  author  = {Wenbo Cui and Chengyang Zhao and Songlin Wei and Jiazhao Zhang and Haoran Geng and Yaran Chen and Haoran Li and He Wang},
  journal = {arXiv preprint arXiv:2411.18276},
  year    = {2025}
}