Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields

Abstract

Editing of neural radiance fields (NeRFs) has recently gained considerable attention, but most prior work focuses on static scenes; appearance editing of dynamic scenes remains relatively unexplored. In this paper, we propose a novel framework for editing the local appearance of dynamic NeRFs by manipulating pixels in a single frame of the training video. Specifically, to edit the appearance of a dynamic NeRF locally while preserving unedited regions, we introduce a local surface representation of the edited region, which can be inserted into and rendered along with the original NeRF and warped to arbitrary other frames through a learned invertible motion representation network. With our method, users without professional expertise can easily add desired content to the appearance of a dynamic scene. We extensively evaluate our approach on various scenes and show that it achieves spatially and temporally consistent editing results. Notably, our approach is versatile and applicable to different variants of dynamic NeRF representations.
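The key property of the invertible motion representation is that surface points can be mapped between frames and back without loss, so an edit anchored at one frame stays attached to the same surface across the whole video. The paper does not spell out the network architecture here; the sketch below only illustrates the general idea with an affine coupling layer, a standard building block of invertible networks. All names and parameters are hypothetical, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters of a tiny conditioning MLP: input is
# [x-coordinate, time t], output is a scale and shift for (y, z).
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 4)) * 0.1, np.zeros(4)

def _scale_shift(x, t):
    # Two-layer MLP predicting the coupling transform from (x, t).
    h = np.tanh(np.array([x[0], t]) @ W1 + b1)
    out = h @ W2 + b2
    return out[:2], out[2:]

def forward(p, t):
    # Affine coupling: x passes through unchanged; (y, z) are scaled and
    # shifted by values predicted from (x, t). Because the transform of
    # (y, z) depends only on the untouched coordinate, it is invertible.
    x, yz = p[:1], p[1:]
    scale, shift = _scale_shift(x, t)
    return np.concatenate([x, yz * np.exp(scale) + shift])

def inverse(q, t):
    # Exact inverse of forward: undo the shift, then the scale.
    x, yz = q[:1], q[1:]
    scale, shift = _scale_shift(x, t)
    return np.concatenate([x, (yz - shift) * np.exp(-scale)])

p = np.array([0.3, -0.5, 1.2])   # a surface point at one frame
q = forward(p, 0.4)              # warped to frame t = 0.4
p_back = inverse(q, 0.4)         # recovered up to float error
print(np.allclose(p, p_back))
```

In practice such layers are stacked (alternating which coordinates pass through) and trained jointly with the radiance field, but the round-trip property demonstrated above is what lets an edit made at a single frame be propagated consistently to all others.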

@article{zhang2025_2307.12909,
  title={Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields},
  author={Shangzan Zhang and Sida Peng and Yinji ShenTu and Qing Shuai and Tianrun Chen and Kaicheng Yu and Hujun Bao and Xiaowei Zhou},
  journal={arXiv preprint arXiv:2307.12909},
  year={2025}
}