ResearchTrend.AI


GraspClutter6D: A Large-scale Real-world Dataset for Robust Perception and Grasping in Cluttered Scenes

9 April 2025
Seunghyeok Back
Joosoon Lee
Kangmin Kim
Heeseon Rho
Geonhyup Lee
Raeyoung Kang
Sangbeom Lee
Sangjun Noh
Youngjin Lee
Taeyeop Lee
Kyoobin Lee
    3DV
Abstract

Robust grasping in cluttered environments remains an open challenge in robotics. While benchmark datasets have significantly advanced deep learning methods, they mainly focus on simplistic scenes with light occlusion and insufficient diversity, limiting their applicability to practical scenarios. We present GraspClutter6D, a large-scale real-world grasping dataset featuring: (1) 1,000 highly cluttered scenes with dense arrangements (14.1 objects/scene, 62.6% occlusion), (2) comprehensive coverage across 200 objects in 75 environment configurations (bins, shelves, and tables) captured using four RGB-D cameras from multiple viewpoints, and (3) rich annotations including 736K 6D object poses and 9.3B feasible robotic grasps for 52K RGB-D images. We benchmark state-of-the-art segmentation, object pose estimation, and grasping detection methods to provide key insights into challenges in cluttered environments. Additionally, we validate the dataset's effectiveness as a training resource, demonstrating that grasping networks trained on GraspClutter6D significantly outperform those trained on existing datasets in both simulation and real-world experiments. The dataset, toolkit, and annotation tools are publicly available on our project website: this https URL.

@article{back2025_2504.06866,
  title={GraspClutter6D: A Large-scale Real-world Dataset for Robust Perception and Grasping in Cluttered Scenes},
  author={Seunghyeok Back and Joosoon Lee and Kangmin Kim and Heeseon Rho and Geonhyup Lee and Raeyoung Kang and Sangbeom Lee and Sangjun Noh and Youngjin Lee and Taeyeop Lee and Kyoobin Lee},
  journal={arXiv preprint arXiv:2504.06866},
  year={2025}
}