DualNeRF: Text-Driven 3D Scene Editing via Dual-Field Representation

22 February 2025
Yuxuan Xiong, Yue Shi, Yishun Dou, Bingbing Ni
Abstract

Recently, denoising diffusion models have achieved promising results in 2D image generation and editing. Instruct-NeRF2NeRF (IN2N) brings this success to 3D scene editing through an "Iterative Dataset Update" (IDU) strategy. Despite its impressive results, IN2N suffers from blurry backgrounds and a tendency to get trapped in local optima. The first problem arises because IN2N provides no efficient guidance for background maintenance; the second stems from the interplay between image editing and NeRF training during IDU. In this work, we introduce DualNeRF to address these problems. We propose a dual-field representation that preserves features of the original scene and uses them as additional guidance for background maintenance during IDU. Moreover, we embed a simulated annealing strategy into IDU, enabling our model to escape local optima. A CLIP-based consistency indicator further improves editing quality by filtering out low-quality edits. Extensive experiments demonstrate that our method outperforms previous methods both qualitatively and quantitatively.
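
The abstract names two mechanisms inside the IDU loop: simulated-annealing acceptance of edits and a CLIP-based consistency filter. The sketch below illustrates how such a step could plausibly be structured; it is a minimal illustration under stated assumptions, not the authors' implementation. The names clip_consistency and idu_step, the nerf.loss and editor interfaces, the encoder argument (a stand-in for a CLIP image encoder such as open_clip's model.encode_image), and the threshold/temperature values are all hypothetical. The dual-field guidance from the frozen original scene is omitted here.

import math
import random

import torch
import torch.nn.functional as F


def clip_consistency(edited: torch.Tensor, reference: torch.Tensor,
                     encoder) -> float:
    """Cosine similarity of two images in CLIP space (higher = more consistent).

    `encoder` is an assumed CLIP image encoder, not part of the paper's code.
    """
    e = F.normalize(encoder(edited[None]), dim=-1)
    r = F.normalize(encoder(reference[None]), dim=-1)
    return (e * r).sum().item()


def idu_step(view, nerf, editor, encoder, step, total_steps,
             t_init=1.0, t_final=0.01, clip_thresh=0.8):
    """One Iterative Dataset Update step with simulated-annealing acceptance
    and CLIP-based filtering (a sketch). `view` is a dict holding 'image',
    'pose', 'prompt', and a 'reference' edit; `nerf.loss` is assumed to
    return a scalar reconstruction loss and `editor` a 2D diffusion edit.
    """
    # Geometric temperature decay: early steps tolerate worse edits,
    # late steps become greedy.
    temp = t_init * (t_final / t_init) ** (step / total_steps)

    edited = editor(view["image"], view["prompt"])  # 2D diffusion edit

    # CLIP-based consistency indicator: discard edits that stray too far
    # from the reference edit.
    if clip_consistency(edited, view["reference"], encoder) < clip_thresh:
        return

    # Simulated annealing: always accept improvements; accept a worse edit
    # with probability exp(-delta / temp) to help escape local optima.
    delta = float(nerf.loss(edited, view["pose"])
                  - nerf.loss(view["image"], view["pose"]))
    if delta < 0 or random.random() < math.exp(-delta / temp):
        view["image"] = edited  # replace the training image with the edit

Decaying the temperature makes the update schedule behave like IDU early in training and like greedy filtering near the end, which matches the stated goal of avoiding local optima without sacrificing final quality.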

@article{xiong2025_2502.16302,
  title={DualNeRF: Text-Driven 3D Scene Editing via Dual-Field Representation},
  author={Yuxuan Xiong and Yue Shi and Yishun Dou and Bingbing Ni},
  journal={arXiv preprint arXiv:2502.16302},
  year={2025}
}