Exploiting Radiance Fields for Grasp Generation on Novel Synthetic Views

Abstract

Vision-based robot manipulation uses cameras to capture one or more images of a scene containing the objects to be manipulated. Taking multiple images can help if an object is occluded from one viewpoint but more visible from another. However, the camera has to be moved through a sequence of suitable positions to capture multiple images, which takes time and may not always be possible due to reachability constraints. So while additional images can produce more accurate grasp poses thanks to the extra information they provide, the time cost grows with the number of views sampled. Scene representations such as Gaussian Splatting can render accurate, photorealistic virtual images from user-specified novel viewpoints. In this work, we present initial results indicating that novel view synthesis can provide additional context for grasp pose generation. Our experiments on the GraspNet-1Billion dataset show that novel views contributed force-closure grasps in addition to those obtained from sparsely sampled real views, while also improving grasp coverage. In the future, we hope this work can be extended to improve grasp extraction from radiance fields constructed from a single input image, using, for example, diffusion models or generalizable radiance fields.
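
As a rough illustration of the novel-view side of such a pipeline, the sketch below samples virtual camera poses on an upper hemisphere around the scene; each pose could be handed to a Gaussian Splatting renderer to synthesize an image for a downstream grasp generator. This is a minimal sketch under our own assumptions: the look_at helper, the hemisphere sampling strategy, and all parameters (radius, elevation range) are illustrative choices, not the authors' actual procedure.

import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    # Build a 4x4 world-to-camera extrinsic (OpenCV convention:
    # camera x right, y down, z forward).
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    down = np.cross(forward, right)
    R = np.stack([right, down, forward])  # rows = camera axes in the world frame
    extrinsic = np.eye(4)
    extrinsic[:3, :3] = R
    extrinsic[:3, 3] = -R @ eye
    return extrinsic

def sample_hemisphere_views(center, radius, n_views, seed=0):
    # Sample virtual camera poses on an upper hemisphere around the scene center,
    # all looking inward at the objects.
    rng = np.random.default_rng(seed)
    poses = []
    for _ in range(n_views):
        azimuth = rng.uniform(0.0, 2.0 * np.pi)
        # Restrict elevation to avoid grazing and degenerate top-down views.
        elevation = rng.uniform(np.deg2rad(20.0), np.deg2rad(70.0))
        eye = center + radius * np.array([
            np.cos(elevation) * np.cos(azimuth),
            np.cos(elevation) * np.sin(azimuth),
            np.sin(elevation),
        ])
        poses.append(look_at(eye, center))
    return poses

# Each pose could be passed to a Gaussian Splatting renderer to synthesize a
# virtual image, which is then fed to a grasp-pose generator alongside the
# sparsely sampled real views.
novel_poses = sample_hemisphere_views(center=np.zeros(3), radius=0.6, n_views=8)
print(novel_poses[0])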

@article{kashyap2025_2505.11467,
  title={Exploiting Radiance Fields for Grasp Generation on Novel Synthetic Views},
  author={Abhishek Kashyap and Henrik Andreasson and Todor Stoyanov},
  journal={arXiv preprint arXiv:2505.11467},
  year={2025}
}