ResearchTrend.AI
Segment then Splat: A Unified Approach for 3D Open-Vocabulary Segmentation based on Gaussian Splatting

28 March 2025
Yiren Lu
Yunlai Zhou
Yiran Qiao
Chaoda Song
Tuo Liang
Jing Ma
Yu Yin
    3DGS
Abstract

Open-vocabulary querying in 3D space is crucial for enabling more intelligent perception in applications such as robotics, autonomous systems, and augmented reality. However, most existing methods rely on 2D pixel-level parsing, leading to multi-view inconsistencies and poor 3D object retrieval. Moreover, they are limited to static scenes and struggle with dynamic scenes due to the complexities of motion modeling. In this paper, we propose Segment then Splat, a 3D-aware open-vocabulary segmentation approach for both static and dynamic scenes based on Gaussian Splatting. Segment then Splat reverses the long-established paradigm of "segmentation after reconstruction" by dividing Gaussians into distinct object sets before reconstruction. Once the reconstruction is complete, the scene is naturally segmented into individual objects, achieving true 3D segmentation. This approach not only eliminates Gaussian-object misalignment issues in dynamic scenes but also accelerates optimization, as it removes the need to learn a separate language field. After optimization, a CLIP embedding is assigned to each object to enable open-vocabulary querying. Extensive experiments on various datasets demonstrate the effectiveness of our proposed method in both static and dynamic scenarios.
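The final querying step described above — matching a text query against the per-object CLIP embeddings — can be sketched as a simple cosine-similarity retrieval. This is an illustrative sketch only, not the authors' implementation: it assumes the per-object embeddings have already been extracted after optimization, and the function names and shapes are hypothetical.

```python
import numpy as np

def cosine_similarity(query: np.ndarray, objects: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query embedding (D,) and N object embeddings (N, D)."""
    q = query / np.linalg.norm(query)
    o = objects / np.linalg.norm(objects, axis=1, keepdims=True)
    return o @ q  # (N,) similarity scores

def query_objects(text_embedding: np.ndarray, object_embeddings: np.ndarray, top_k: int = 1) -> np.ndarray:
    """Return indices of the top_k objects whose embedding best matches the query.

    In the paper's setting, `text_embedding` would come from a CLIP text encoder
    and `object_embeddings` would be the per-object CLIP features assigned after
    reconstruction; here both are just arrays.
    """
    sims = cosine_similarity(text_embedding, object_embeddings)
    return np.argsort(-sims)[:top_k]

# Toy usage: three "objects", the second aligned with the query direction.
text = np.array([1.0, 0.0, 0.0])
objs = np.array([
    [0.0, 1.0, 0.0],   # object 0: orthogonal to query
    [1.0, 0.1, 0.0],   # object 1: nearly parallel to query
    [0.0, 0.0, 1.0],   # object 2: orthogonal to query
])
best = query_objects(text, objs, top_k=1)
```

Because each Gaussian set is already bound to one object, retrieval reduces to this single embedding lookup rather than a per-pixel language-field decode.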

@article{lu2025_2503.22204,
  title={Segment then Splat: A Unified Approach for 3D Open-Vocabulary Segmentation based on Gaussian Splatting},
  author={Yiren Lu and Yunlai Zhou and Yiran Qiao and Chaoda Song and Tuo Liang and Jing Ma and Yu Yin},
  journal={arXiv preprint arXiv:2503.22204},
  year={2025}
}