Gaussian Splatting is an Effective Data Generator for 3D Object Detection

23 April 2025
Farhad G. Zanjani
Davide Abati
Auke Wiggers
Dimitris Kalatzis
Jens Petersen
Hong Cai
Amirhossein Habibian
    3DGS
Abstract

We investigate data augmentation for 3D object detection in autonomous driving. We utilize recent advancements in 3D reconstruction based on Gaussian Splatting for 3D object placement in driving scenes. Unlike existing diffusion-based methods that synthesize images conditioned on BEV layouts, our approach places 3D objects directly in the reconstructed 3D space with explicitly imposed geometric transformations. This ensures both the physical plausibility of object placement and highly accurate 3D pose and position annotations. Our experiments demonstrate that even when integrating a limited number of external 3D objects into real scenes, the augmented data significantly enhances 3D object detection performance and outperforms existing diffusion-based 3D augmentation for object detection. Extensive testing on the nuScenes dataset reveals that imposing high geometric diversity in object placement has a greater impact than the appearance diversity of objects. Additionally, we show that generating hard examples, either by maximizing detection loss or imposing high visual occlusion in camera images, does not lead to more efficient 3D data augmentation for camera-based 3D object detection in autonomous driving.
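To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of why placing objects directly in reconstructed 3D space yields exact labels: a Gaussian-splat object asset is inserted into a scene with an explicit SE(3) transform, and the 3D box annotation is read off from that same transform rather than estimated afterwards. The `GaussianObject` container, its fields, and the `place_object` helper are hypothetical names introduced for illustration only.

```python
import numpy as np

class GaussianObject:
    """Hypothetical container for an object-level 3D Gaussian Splatting asset."""
    def __init__(self, means, rotations, scales, box_size):
        self.means = means          # (N, 3) Gaussian centers in the object frame
        self.rotations = rotations  # (N, 3, 3) per-Gaussian rotation matrices
        self.scales = scales        # (N, 3) per-Gaussian axis scales
        self.box_size = box_size    # (l, w, h) of the object's 3D bounding box


def place_object(obj, yaw, translation):
    """Apply a rigid transform (yaw + translation) to the object's Gaussians
    and return both the transformed Gaussians and the exact 3D box label."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])       # rotation about the vertical axis
    t = np.asarray(translation, dtype=np.float64)

    means_world = obj.means @ R.T + t     # transform Gaussian centers
    rots_world = R[None] @ obj.rotations  # rotate per-Gaussian orientations

    # The annotation is exact by construction: center, size, and yaw come
    # directly from the imposed transform, not from a labeling model.
    annotation = {"center": t, "size": obj.box_size, "yaw": yaw}
    return means_world, rots_world, annotation


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    obj = GaussianObject(
        means=rng.normal(scale=0.5, size=(100, 3)),
        rotations=np.repeat(np.eye(3)[None], 100, axis=0),
        scales=np.full((100, 3), 0.05),
        box_size=(4.5, 1.9, 1.6),         # a roughly car-sized box, in meters
    )
    means_w, rots_w, ann = place_object(obj, yaw=np.pi / 4, translation=[10.0, 2.0, 0.0])
    print(ann)
```

Because the placement transform and the label share the same parameters, geometric diversity can be increased simply by sampling more varied yaws and translations, which is the axis the paper reports as most beneficial.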

@article{zanjani2025_2504.16740,
  title={Gaussian Splatting is an Effective Data Generator for 3D Object Detection},
  author={Farhad G. Zanjani and Davide Abati and Auke Wiggers and Dimitris Kalatzis and Jens Petersen and Hong Cai and Amirhossein Habibian},
  journal={arXiv preprint arXiv:2504.16740},
  year={2025}
}