CD-NGP: A Fast Scalable Continual Representation for Dynamic Scenes
- 3DV · 3DGS

Main: 10 pages
Figures: 15
Bibliography: 3 pages
Tables: 19
Appendix: 5 pages
Abstract
We present CD-NGP, a fast and scalable representation for 3D reconstruction and novel view synthesis in dynamic scenes. Inspired by continual learning, our method first segments input videos into multiple chunks, then trains the model chunk by chunk, and finally fuses the features of the first branch with those of subsequent branches. Experiments on the prevailing DyNeRF dataset demonstrate that our proposed representation achieves a favorable balance among memory consumption, model size, training speed, and rendering quality. Specifically, our method consumes less training memory (GB) than offline methods and requires significantly lower streaming bandwidth (MB/frame) than other online alternatives.
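The chunk-wise continual pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`split_into_chunks`, `train_branch`, `fuse`) are hypothetical placeholders, and the hash-grid branch architecture of CD-NGP is abstracted away entirely.

```python
# Hedged sketch of the chunk-wise (continual) training loop from the abstract.
# All names here are hypothetical; CD-NGP's actual branches are hash-grid
# encodings, which this sketch abstracts into opaque feature objects.

def split_into_chunks(frames, num_chunks):
    """Segment the input video frames into contiguous chunks."""
    size = (len(frames) + num_chunks - 1) // num_chunks
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def continual_train(frames, num_chunks, train_branch, fuse):
    """Train one branch per chunk; fuse each later branch's features
    with those of the first (base) branch, as the abstract describes."""
    chunks = split_into_chunks(frames, num_chunks)
    base = train_branch(chunks[0])          # first branch: trained alone
    branches = [base]
    for chunk in chunks[1:]:
        feat = train_branch(chunk)          # subsequent branch for this chunk
        branches.append(fuse(base, feat))   # fuse with base-branch features
    return branches
```

Because each branch only sees its own chunk, peak training memory scales with chunk length rather than full video length, which is the source of the memory and bandwidth savings claimed in the abstract.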
