Unified Geometry and Color Compression Framework for Point Clouds via Generative Diffusion Priors

Abstract

With the growth of 3D applications and the rapid increase in sensor-collected 3D point cloud data, there is a rising demand for efficient compression algorithms. Most existing learning-based compression methods handle geometry and color attributes separately, treating them as distinct tasks, which makes them difficult to apply directly to colored point clouds. Moreover, the limited capacity of their training datasets restricts their generalizability to point clouds with different distributions. In this work, we introduce a test-time unified geometry and color compression framework for 3D point clouds. Instead of training a compression model on specific datasets, we adapt a pre-trained generative diffusion model via prompt tuning to compress the original colored point clouds into sparse sets, termed 'seeds'. Decompression is then achieved through multiple denoising steps with separate sampling processes. Experiments on objects and indoor scenes demonstrate that our method outperforms existing baselines in compressing both geometry and color.
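
The following is a minimal sketch of the compress/decompress pipeline as described in the abstract, not the authors' actual implementation: all class and function names (SeedCompressor, denoise_step, decompress) and the specific subsampling and denoising dynamics are hypothetical placeholders standing in for the prompt-tuned diffusion model.

```python
# Hypothetical sketch of compressing a colored point cloud into sparse "seeds"
# and reconstructing it via iterative denoising. Names and dynamics are
# illustrative assumptions, not the paper's API.
import torch
import torch.nn as nn


class SeedCompressor(nn.Module):
    """Toy stand-in for the prompt-tuned encoder that maps a colored
    point cloud (N x 6: xyz + rgb) to a sparse 'seed' set (M x 6)."""

    def __init__(self, num_seeds: int = 256):
        super().__init__()
        self.num_seeds = num_seeds
        # A learnable prompt that would be tuned per input at test time.
        self.prompt = nn.Parameter(torch.zeros(1, 64))

    def compress(self, points: torch.Tensor) -> torch.Tensor:
        # Placeholder: random subsampling stands in for the learned
        # compression into seeds.
        idx = torch.randperm(points.shape[0])[: self.num_seeds]
        return points[idx]


def denoise_step(noisy: torch.Tensor, seeds: torch.Tensor, t: int) -> torch.Tensor:
    """Placeholder single denoising step conditioned on the seeds.
    A real implementation would invoke the pre-trained diffusion model."""
    # Toy dynamics: pull each noisy point slightly toward its nearest seed.
    dists = torch.cdist(noisy[:, :3], seeds[:, :3])
    nearest = seeds[dists.argmin(dim=1)]
    return noisy + 0.1 * (nearest - noisy)


def decompress(seeds: torch.Tensor, num_points: int = 2048, steps: int = 50) -> torch.Tensor:
    """Reconstruct a dense colored point cloud from seeds via iterative denoising."""
    x = torch.randn(num_points, 6)  # start from Gaussian noise (xyz + rgb)
    for t in reversed(range(steps)):
        x = denoise_step(x, seeds, t)
    return x


if __name__ == "__main__":
    cloud = torch.rand(10000, 6)                # dummy colored point cloud
    compressor = SeedCompressor(num_seeds=256)
    seeds = compressor.compress(cloud)          # sparse "compressed" representation
    recon = decompress(seeds)                   # reconstructed dense cloud
    print(seeds.shape, recon.shape)
```

In this sketch the seed set is the only payload that would need to be transmitted; the decoder regenerates the dense colored cloud from noise conditioned on those seeds.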

@article{huang2025_2503.18083,
  title={Unified Geometry and Color Compression Framework for Point Clouds via Generative Diffusion Priors},
  author={Tianxin Huang and Gim Hee Lee},
  journal={arXiv preprint arXiv:2503.18083},
  year={2025}
}