Structurally Disentangled Feature Fields Distillation for 3D Understanding and Editing

21 February 2025
Yoel Levy, David Shavin, Itai Lang, Sagie Benaim
Abstract

Recent work has demonstrated the ability to distill features from large pre-trained 2D models into 3D feature fields, enabling impressive 3D editing and understanding capabilities using only 2D supervision. However, these models assume that 3D features are captured by a single feature field and often make the simplifying assumption that features are view-independent. In this work, we propose instead to capture 3D features using multiple disentangled feature fields that model different structural components, including view-dependent and view-independent ones, and that can be learned from 2D feature supervision alone. Each component can then be controlled in isolation, enabling semantic and structural understanding and editing. For instance, from a user click, one can segment the 3D features corresponding to a given object and then segment, edit, or remove its view-dependent (reflective) properties. We evaluate our approach on the task of 3D segmentation and demonstrate a set of novel understanding and editing tasks.
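
The disentanglement described in the abstract can be made concrete with a small sketch. The following is a hypothetical, minimal PyTorch illustration, not the authors' implementation: one field maps 3D position to view-independent features, a second maps position plus viewing direction to view-dependent features, and their composition is supervised by features from a pre-trained 2D model. All module names, dimensions, and the additive composition rule are assumptions.

import torch
import torch.nn as nn

class DisentangledFeatureFields(nn.Module):
    """Sketch of two disentangled feature fields (assumed architecture)."""

    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        # View-independent field: features depend only on 3D position.
        self.static_field = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )
        # View-dependent field: features depend on position and viewing
        # direction, intended to capture reflective/specular components.
        self.view_field = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, xyz, view_dir):
        f_static = self.static_field(xyz)                            # (N, feat_dim)
        f_view = self.view_field(torch.cat([xyz, view_dir], dim=-1)) # (N, feat_dim)
        # Assumed composition: the full feature is the sum of both parts,
        # so each component can later be queried or edited in isolation.
        return f_static + f_view, f_static, f_view

def distillation_loss(rendered_feat, teacher_feat_2d):
    # 2D supervision only: rendered per-pixel features are regressed toward
    # features extracted from a large pre-trained 2D model (e.g. DINO/CLIP).
    return torch.mean((rendered_feat - teacher_feat_2d) ** 2)

Under this additive assumption, removing the view-dependent (reflective) properties of a segmented object amounts to zeroing or replacing f_view for the selected region while leaving f_static untouched.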

@article{levy2025_2502.14789,
  title={Structurally Disentangled Feature Fields Distillation for 3D Understanding and Editing},
  author={Yoel Levy and David Shavin and Itai Lang and Sagie Benaim},
  journal={arXiv preprint arXiv:2502.14789},
  year={2025}
}