
ODHSR: Online Dense 3D Reconstruction of Humans and Scenes from Monocular Videos

Abstract

Photorealistic reconstruction of humans and scenes from a single monocular in-the-wild video figures prominently in the perception of a human-centric 3D world. Recent neural rendering advances have enabled holistic human-scene reconstruction but require pre-calibrated camera and human poses and days of training time. In this work, we introduce a novel unified framework that simultaneously performs camera tracking, human pose estimation, and human-scene reconstruction in an online fashion. 3D Gaussian Splatting is utilized to efficiently learn Gaussian primitives for humans and scenes, and reconstruction-based camera tracking and human pose estimation modules are designed to enable holistic understanding and effective disentanglement of pose and appearance. Specifically, we design a human deformation module to faithfully reconstruct details and enhance generalizability to out-of-distribution poses. To accurately learn the spatial correlation between human and scene, we introduce occlusion-aware human silhouette rendering and monocular geometric priors, which further improve reconstruction quality. Experiments on the EMDB and NeuMan datasets demonstrate superior or on-par performance with existing methods in camera tracking, human pose estimation, novel view synthesis, and runtime. Our project page is at this https URL.
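The human deformation module rests on the standard idea of warping canonical human Gaussians into posed space. Below is a minimal sketch of such a deformation via linear blend skinning (LBS), assuming SMPL-like per-bone rigid transforms; the function name, tensor shapes, and skinning weights are illustrative assumptions, not the paper's actual API or full deformation model.

# Hypothetical sketch: warp canonical human Gaussian centers into posed
# space via linear blend skinning. The paper's deformation module is more
# elaborate; this only illustrates the underlying skinning mechanism.
import torch

def lbs_deform(mu_canon, skin_weights, bone_transforms):
    """Warp canonical Gaussian means into the posed space.

    mu_canon:        (N, 3) canonical Gaussian means
    skin_weights:    (N, J) per-Gaussian skinning weights (rows sum to 1)
    bone_transforms: (J, 4, 4) rigid bone transforms for the current pose
    """
    # Blend the per-bone rigid transforms with the skinning weights.
    T = torch.einsum('nj,jab->nab', skin_weights, bone_transforms)  # (N, 4, 4)
    # Apply the blended transform in homogeneous coordinates.
    mu_h = torch.cat([mu_canon, torch.ones_like(mu_canon[:, :1])], dim=-1)
    return torch.einsum('nab,nb->na', T, mu_h)[:, :3]

# Example: 1000 Gaussians, 24 SMPL joints; identity transforms leave points fixed.
N, J = 1000, 24
mu = torch.randn(N, 3)
w = torch.softmax(torch.randn(N, J), dim=-1)
T_id = torch.eye(4).expand(J, 4, 4).contiguous()
assert torch.allclose(lbs_deform(mu, w, T_id), mu, atol=1e-5)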

@article{zhang2025_2504.13167,
  title={ODHSR: Online Dense 3D Reconstruction of Humans and Scenes from Monocular Videos},
  author={Zetong Zhang and Manuel Kaufmann and Lixin Xue and Jie Song and Martin R. Oswald},
  journal={arXiv preprint arXiv:2504.13167},
  year={2025}
}