
Efficiently Reconstructing Dynamic Scenes One D4RT at a Time

Chuhan Zhang
Guillaume Le Moing
Skanda Koppula
Ignacio Rocco
Liliane Momeni
Junyu Xie
Shuyang Sun
Rahul Sukthankar
Joëlle K. Barral
Raia Hadsell
Zoubin Ghahramani
Andrew Zisserman
Junlin Zhang
Mehdi S. M. Sajjadi
Main: 8 pages, Appendix: 6 pages, Bibliography: 3 pages, 14 figures, 11 tables
Abstract

Understanding and reconstructing the complex geometry and motion of dynamic scenes from video remains a formidable challenge in computer vision. This paper introduces D4RT, a simple yet powerful feedforward model designed to solve this task efficiently. D4RT uses a unified transformer architecture to jointly infer depth, spatio-temporal correspondence, and full camera parameters from a single video. Its core innovation is a novel querying mechanism that sidesteps both the heavy computation of dense, per-frame decoding and the complexity of managing multiple task-specific decoders. Our decoding interface allows the model to independently and flexibly probe the 3D position of any point in space and time. The result is a lightweight and highly scalable method that enables remarkably efficient training and inference. We demonstrate that our approach sets a new state of the art, outperforming previous methods across a wide spectrum of 4D reconstruction tasks. We refer to the project webpage for animated results: this https URL.
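The idea of a query-based decoding interface, where the model is probed for the 3D position of an arbitrary space-time point instead of decoding dense per-frame outputs, can be illustrated with a minimal sketch. All names, shapes, weights, and the single cross-attention read-out below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical shapes: the paper does not specify these.
D = 64               # feature dimension (assumed)
T, H, W = 4, 8, 8    # video tokens: T frames of an H x W feature grid

rng = np.random.default_rng(0)
video_tokens = rng.normal(size=(T * H * W, D))  # stand-in for encoder output

# Projection matrices for one cross-attention read-out (random stand-in weights).
Wq = rng.normal(size=(4, D)) * 0.1   # query embed: (u, v, t_src, t_tgt) -> D
Wk = rng.normal(size=(D, D)) * 0.1   # key projection
Wv = rng.normal(size=(D, D)) * 0.1   # value projection
Wo = rng.normal(size=(D, 3)) * 0.1   # head mapping attended features -> XYZ

def query_point(u, v, t_src, t_tgt):
    """Probe the 3D position of pixel (u, v) observed at time t_src,
    as located at target time t_tgt, via one cross-attention step.
    Each query is independent, so queries can be issued sparsely and
    in parallel rather than decoding every pixel of every frame."""
    q = np.array([u, v, t_src, t_tgt]) @ Wq     # (D,)  query embedding
    k = video_tokens @ Wk                       # (N, D)
    vals = video_tokens @ Wv                    # (N, D)
    attn = softmax(k @ q / np.sqrt(D))          # (N,)  attention over tokens
    return (attn @ vals) @ Wo                   # (3,)  predicted XYZ

xyz = query_point(0.25, 0.5, 0.0, 0.75)
print(xyz.shape)  # (3,)
```

The point of the sketch is the interface, not the numbers: because each query attends to the shared video tokens on its own, the cost of decoding scales with the number of points actually probed.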
