SAMBLE: Shape-Specific Point Cloud Sampling for an Optimal Trade-Off Between Local Detail and Global Uniformity

28 April 2025
Chengzhi Wu
Yuxin Wan
Hao Fu
Julius Pfrommer
Zeyun Zhong
Junwei Zheng
Jiaming Zhang
Jürgen Beyerer
    3DPC
Abstract

Driven by the increasing demand for accurate and efficient representation of 3D data in various domains, point cloud sampling has emerged as a pivotal research topic in 3D computer vision. Recently, learning-to-sample methods have garnered growing interest from the community, particularly for their ability to be jointly trained with downstream tasks. However, previous learning-based sampling methods either produce unrecognizable sampling patterns by generating a new point cloud, or yield biased sampling results by focusing excessively on sharp edge details. Moreover, they overlook the natural variations in point distribution across different shapes, applying a similar sampling strategy to all point clouds. In this paper, we propose a Sparse Attention Map and Bin-based Learning method (termed SAMBLE) to learn shape-specific sampling strategies for individual point clouds. SAMBLE achieves an improved balance between sampling edge points for local detail and preserving uniformity of the global shape, resulting in superior performance across multiple common point cloud downstream tasks, even in scenarios with few-point sampling.

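To make the local-detail vs. global-uniformity trade-off concrete, the sketch below shows one hand-crafted way such a trade-off can be scored. This is not the SAMBLE algorithm (which learns a shape-specific strategy via sparse attention maps and bin-based learning); it is only a minimal NumPy illustration of mixing an edge-saliency term with a farthest-point-style uniformity term. The names edge_scores, sample_with_tradeoff, and the mixing weight alpha are hypothetical and introduced here for illustration.

import numpy as np

def edge_scores(points, k=16):
    # Illustrative proxy for "edge-ness" (not from the paper): distance of each
    # point to the centroid of its k nearest neighbours. Points on sharp edges
    # or corners tend to have asymmetric neighbourhoods, hence larger offsets.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # (N, N)
    knn_idx = np.argsort(dists, axis=1)[:, 1:k + 1]                            # exclude self
    centroids = points[knn_idx].mean(axis=1)                                   # (N, 3)
    return np.linalg.norm(points - centroids, axis=1)

def sample_with_tradeoff(points, m, alpha=0.5, k=16):
    # Greedy sampling: each candidate is scored by a mix of edge saliency
    # (local detail) and distance to the already selected set (global
    # uniformity). alpha = 0 reduces to farthest point sampling; alpha = 1
    # picks only the most edge-salient points.
    saliency = edge_scores(points, k)
    saliency = saliency / (saliency.max() + 1e-12)

    selected = [int(np.argmax(saliency))]                 # start at the most salient point
    min_dist = np.linalg.norm(points - points[selected[0]], axis=1)

    for _ in range(m - 1):
        uniformity = min_dist / (min_dist.max() + 1e-12)
        score = alpha * saliency + (1.0 - alpha) * uniformity
        score[selected] = -np.inf                          # never pick a point twice
        nxt = int(np.argmax(score))
        selected.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(selected)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-1.0, 1.0, size=(512, 3)).astype(np.float32)
    idx = sample_with_tradeoff(cloud, m=64, alpha=0.5)
    print(idx.shape)  # (64,)

In this toy scheme the balance is fixed by a single global alpha; the point of SAMBLE, per the abstract, is that such a strategy should instead be learned per shape rather than applied uniformly to all point clouds.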
@article{wu2025_2504.19581,
  title={SAMBLE: Shape-Specific Point Cloud Sampling for an Optimal Trade-Off Between Local Detail and Global Uniformity},
  author={Chengzhi Wu and Yuxin Wan and Hao Fu and Julius Pfrommer and Zeyun Zhong and Junwei Zheng and Jiaming Zhang and Jürgen Beyerer},
  journal={arXiv preprint arXiv:2504.19581},
  year={2025}
}