Efficient 3D Perception on Multi-Sweep Point Cloud with Gumbel Spatial Pruning

12 November 2024
Jianhao Li
Tianyu Sun
Xueqian Zhang
Zhongdao Wang
Bailan Feng
Hengshuang Zhao
Abstract

This paper studies point cloud perception in outdoor environments. Existing methods struggle to recognize distant or occluded objects because outdoor point clouds are sparse. In this work, we observe that accumulating multiple temporally consecutive point cloud sweeps significantly mitigates this problem, yielding a remarkable improvement in perception accuracy. However, the computational cost also grows, which has prevented previous approaches from utilizing large numbers of sweeps. To tackle this challenge, we find that a considerable portion of the points in the accumulated point cloud is redundant, and discarding them has minimal impact on perception accuracy. We introduce a simple yet effective Gumbel Spatial Pruning (GSP) layer that dynamically prunes points via a sampling mechanism learned end to end. The GSP layer is decoupled from other network components and can therefore be seamlessly integrated into existing point cloud network architectures. Without incurring additional computational overhead, we increase the number of point cloud sweeps from 10, the common practice, to as many as 40, yielding a significant enhancement in perception performance. For instance, on nuScenes 3D object detection and BEV map segmentation, our pruning strategy improves several 3D perception baselines.
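The key idea is a per-point binary keep/drop decision that stays differentiable by sampling it with the Gumbel-Softmax (straight-through) trick, so the pruning policy can be trained jointly with the perception network. Below is a minimal PyTorch sketch of such a layer; the class name follows the paper, but the scoring MLP, the (drop, keep) logit parameterization, and the masking behavior are illustrative assumptions rather than the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GumbelSpatialPruning(nn.Module):
        """Per-point keep/drop gating sampled with Gumbel-Softmax.

        The scoring MLP and the two-class (drop, keep) parameterization
        are illustrative assumptions, not the paper's exact design.
        """

        def __init__(self, feat_dim: int, tau: float = 1.0):
            super().__init__()
            # Small MLP that scores each point for pruning.
            self.scorer = nn.Sequential(
                nn.Linear(feat_dim, feat_dim // 2),
                nn.ReLU(inplace=True),
                nn.Linear(feat_dim // 2, 2),  # logits for (drop, keep)
            )
            self.tau = tau

        def forward(self, feats: torch.Tensor):
            # feats: (N, C) per-point features from the backbone.
            logits = self.scorer(feats)  # (N, 2)
            # Hard Gumbel-Softmax with a straight-through gradient, so the
            # discrete keep/drop decision remains trainable end to end.
            keep = F.gumbel_softmax(logits, tau=self.tau, hard=True)[:, 1]
            # During training, zero out dropped points while keeping the
            # tensor shape; at inference, the mask can physically discard
            # points to save computation in downstream layers.
            return feats * keep.unsqueeze(-1), keep

With hard=True, torch.nn.functional.gumbel_softmax returns a one-hot sample in the forward pass while propagating soft gradients backward, which is what makes the binary pruning mask learnable end to end.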

@article{sun2025_2411.07742,
  title={Efficient 3D Perception on Multi-Sweep Point Cloud with Gumbel Spatial Pruning},
  author={Tianyu Sun and Jianhao Li and Xueqian Zhang and Zhongdao Wang and Bailan Feng and Hengshuang Zhao},
  journal={arXiv preprint arXiv:2411.07742},
  year={2025}
}