GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians

2 October 2024
Shuyi Jiang, Qihao Zhao, Hossein Rahmani, De Wen Soh, Jun Liu, Na Zhao
3DGS
Abstract

Recently, with the development of Neural Radiance Fields and Gaussian Splatting, 3D reconstruction techniques have achieved remarkably high fidelity. However, the latent representations learned by these methods are highly entangled and lack interpretability. In this paper, we propose a novel part-aware compositional reconstruction method, called GaussianBlock, that enables semantically coherent and disentangled representations, allowing for precise and physical editing akin to building blocks, while simultaneously maintaining high fidelity. Our GaussianBlock introduces a hybrid representation that leverages the advantages of both primitives, known for their flexible actionability and editability, and 3D Gaussians, which excel in reconstruction quality. Specifically, we achieve semantically coherent primitives through a novel attention-guided centering loss derived from 2D semantic priors, complemented by a dynamic splitting and fusion strategy. Furthermore, we utilize 3D Gaussians that hybridize with primitives to refine structural details and enhance fidelity. Additionally, a binding inheritance strategy is employed to strengthen and maintain the connection between the two. Our reconstructed scenes are shown to be disentangled, compositional, and compact across diverse benchmarks, enabling seamless, direct, and precise editing while maintaining high quality.
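
To make the attention-guided centering idea concrete, the following is a minimal PyTorch sketch of one plausible form of such a loss: each primitive's projected 2D center is pulled toward the attention-weighted centroid of the 2D semantic attention map associated with it. The tensor shapes, the function name attention_centering_loss, and the exact weighting are illustrative assumptions, not the paper's implementation.

import torch

def attention_centering_loss(primitive_centers_2d: torch.Tensor,
                             attention_maps: torch.Tensor) -> torch.Tensor:
    """Hypothetical centering loss (not the authors' code).

    primitive_centers_2d: (P, 2) projected centers of P primitives, in pixel coords (x, y).
    attention_maps:       (P, H, W) non-negative 2D semantic attention, one map per primitive.
    Returns a scalar: mean squared distance between each projected center and the
    attention-weighted centroid of its map.
    """
    P, H, W = attention_maps.shape
    # Pixel coordinate grids, each of shape (H, W).
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=attention_maps.dtype, device=attention_maps.device),
        torch.arange(W, dtype=attention_maps.dtype, device=attention_maps.device),
        indexing="ij",
    )
    # Normalize each attention map so it sums to 1 (epsilon guards empty maps).
    w = attention_maps / (attention_maps.sum(dim=(1, 2), keepdim=True) + 1e-8)
    # Attention-weighted centroid per primitive, shape (P, 2) as (x, y).
    cx = (w * xs).sum(dim=(1, 2))
    cy = (w * ys).sum(dim=(1, 2))
    centroids = torch.stack([cx, cy], dim=-1)
    return ((primitive_centers_2d - centroids) ** 2).sum(dim=-1).mean()

In a full pipeline, a term of this kind would presumably be added to the photometric reconstruction loss with a small weight, alongside the dynamic splitting/fusion and binding-inheritance steps described in the abstract.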

@article{jiang2025_2410.01535,
  title={GaussianBlock: Building Part-Aware Compositional and Editable 3D Scene by Primitives and Gaussians},
  author={Shuyi Jiang and Qihao Zhao and Hossein Rahmani and De Wen Soh and Jun Liu and Na Zhao},
  journal={arXiv preprint arXiv:2410.01535},
  year={2025}
}