Dynamic Point Maps: A Versatile Representation for Dynamic 3D Reconstruction

Abstract

DUSt3R has recently shown that many tasks in multi-view geometry, including estimating camera intrinsics and extrinsics, reconstructing the scene in 3D, and establishing image correspondences, can be reduced to predicting a pair of viewpoint-invariant point maps, i.e., pixel-aligned point clouds defined in a common reference frame. This formulation is elegant and powerful, but it cannot handle dynamic scenes. To address this limitation, we introduce Dynamic Point Maps (DPM), which extend standard point maps to support 4D tasks such as motion segmentation, scene flow estimation, 3D object tracking, and 2D correspondence. Our key intuition is that, once time is introduced, there are several possible combinations of spatial and temporal references in which the point maps can be defined. We identify a minimal subset of such combinations that a network can regress to solve the sub-tasks listed above. We train a DPM predictor on a mixture of synthetic and real data and evaluate it across diverse benchmarks for video depth prediction, dynamic point cloud reconstruction, 3D scene flow, and object pose tracking, achieving state-of-the-art performance. Code, models, and additional results are available at this https URL.
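
To make the idea of multiple spatial and temporal references concrete, here is a minimal tensor-level sketch. It is not the authors' code: the variable names, shapes, and threshold are illustrative assumptions. It shows how a pair of dynamic point maps, all expressed in the first camera's spatial frame but at two reference times, could be laid out, and how scene flow and motion segmentation then fall out by simple tensor operations.

import torch

# Sketch of a Dynamic Point Map pair for images (I_a at time t_a, I_b at time t_b).
# A standard DUSt3R head predicts one point map per image in camera a's frame.
# A DPM head additionally disentangles *when* each point is expressed:
# X_i_tj = 3D points of image i, in camera a's frame, at time t_j.

H, W = 192, 256  # example resolution (assumed)

# Four pixel-aligned point maps (H x W x 3), indexed by
# (source image, reference time); dummy values stand in for network outputs.
X_a_ta = torch.randn(H, W, 3)  # image a, geometry at time t_a
X_a_tb = torch.randn(H, W, 3)  # image a, geometry advected to time t_b
X_b_ta = torch.randn(H, W, 3)  # image b, geometry rewound to time t_a
X_b_tb = torch.randn(H, W, 3)  # image b, geometry at time t_b

# Scene flow for image a's pixels falls out by subtraction, since both
# maps are pixel-aligned and share the same spatial reference frame.
flow_a = X_a_tb - X_a_ta  # (H, W, 3): 3D motion of each pixel of image a

# Motion segmentation: static points move (approximately) zero distance.
static_mask = flow_a.norm(dim=-1) < 0.05  # threshold is an arbitrary example

Because every map lives in one common spatial frame, downstream tasks such as 3D object tracking reduce to comparing or aligning subsets of these pixel-aligned point clouds across reference times.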

@article{sucar2025_2503.16318,
  title={Dynamic Point Maps: A Versatile Representation for Dynamic 3D Reconstruction},
  author={Edgar Sucar and Zihang Lai and Eldar Insafutdinov and Andrea Vedaldi},
  journal={arXiv preprint arXiv:2503.16318},
  year={2025}
}