Hierarchical Attention Networks for Lossless Point Cloud Attribute Compression

Abstract

In this paper, we propose a deep hierarchical attention context model for lossless attribute compression of point clouds, leveraging a multi-resolution spatial structure and residual learning. A simple and effective Level of Detail (LoD) structure is introduced to yield a coarse-to-fine representation. To enhance efficiency, points within the same refinement level are encoded in parallel, sharing a common context point group. By hierarchically aggregating information from neighboring points, our attention model learns contextual dependencies across varying scales and densities, enabling comprehensive feature extraction. We also normalize position coordinates and attributes to achieve scale-invariant compression. Additionally, we segment the point cloud into multiple slices to facilitate parallel processing, further reducing runtime. Experimental results demonstrate that the proposed method offers better coding performance than the latest G-PCC for color and reflectance attributes while maintaining more efficient encoding and decoding runtimes.
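To make the coarse-to-fine idea concrete, the sketch below shows a greedy distance-based LoD partition and the coordinate normalization described above. This is a minimal illustration under common assumptions (a G-PCC-style distance criterion with the threshold halved per level); the paper's exact LoD construction may differ, and the function names are hypothetical.

```python
import numpy as np

def build_lods(points, num_levels=3, base_dist=4.0):
    """Greedy distance-based LoD partition (an assumed, G-PCC-style scheme;
    the paper's exact construction is not specified in the abstract).
    Each coarser level keeps points at least `dist` apart from points
    already selected; the threshold is halved per level."""
    remaining = list(range(len(points)))
    selected = []  # indices already placed in coarser levels
    levels = []
    dist = base_dist
    for _ in range(num_levels - 1):
        level = []
        for i in list(remaining):
            if not selected or np.min(
                    np.linalg.norm(points[selected] - points[i], axis=1)) >= dist:
                level.append(i)
                selected.append(i)
                remaining.remove(i)
        levels.append(level)
        dist /= 2.0
    levels.append(remaining)  # finest refinement level: all leftover points
    return levels

def normalize(points):
    """Map coordinates into the unit cube for scale-invariant processing."""
    mn, mx = points.min(axis=0), points.max(axis=0)
    return (points - mn) / max((mx - mn).max(), 1e-9)
```

Within one refinement level, every point's context group can be drawn from the coarser levels already decoded, which is what allows the points of a level to be coded in parallel.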

@article{chen2025_2504.00481,
  title={Hierarchical Attention Networks for Lossless Point Cloud Attribute Compression},
  author={Yueru Chen and Wei Zhang and Dingquan Li and Jing Wang and Ge Li},
  journal={arXiv preprint arXiv:2504.00481},
  year={2025}
}