DuoMo: Dual Motion Diffusion for World-Space Human Reconstruction

Yufu Wang
Evonne Ng
Soyong Shin
Rawal Khirodkar
Yuan Dong
Zhaoen Su
Jinhyung Park
Kris Kitani
Alexander Richard
Fabian Prada
Michael Zollhöfer
Main: 8 pages · Bibliography: 4 pages · Appendix: 3 pages · 12 figures · 6 tables
Abstract

We present DuoMo, a generative method that recovers human motion in world-space coordinates from unconstrained videos with noisy or incomplete observations. Reconstructing such motion requires solving a fundamental trade-off: generalizing from diverse and noisy video inputs while maintaining global motion consistency. Our approach addresses this problem by factorizing motion learning into two diffusion models. The camera-space model first estimates motion from videos in camera coordinates. The world-space model then lifts this initial estimate into world coordinates and refines it to be globally consistent. Together, the two models can reconstruct motion across diverse scenes and trajectories, even from highly noisy or incomplete observations. Moreover, our formulation is general, generating the motion of mesh vertices directly and bypassing parametric models. DuoMo achieves state-of-the-art performance. On EMDB, our method obtains a 16% reduction in world-space reconstruction error while maintaining low foot skating. On RICH, it obtains a 30% reduction in world-space error. Project page: this https URL
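To make the two-stage factorization concrete, below is a minimal sketch of what such an inference pipeline could look like. This is not the paper's implementation: the class name, the denoise_step method, the conditioning interface, the vertex count, and all tensor shapes are assumptions introduced for illustration only.

```python
import torch

class DualMotionPipeline:
    """Hypothetical two-stage sampler: camera-space diffusion, then
    world-space lifting/refinement, as described in the abstract."""

    def __init__(self, camera_model, world_model, num_steps=50):
        # camera_model: diffusion model conditioned on per-frame video
        # features, predicting motion in camera coordinates.
        # world_model: diffusion model that lifts the camera-space estimate
        # into a globally consistent world-space trajectory.
        self.camera_model = camera_model
        self.world_model = world_model
        self.num_steps = num_steps

    @torch.no_grad()
    def sample(self, video_features):
        # Stage 1: sample camera-space motion from noise, conditioned on the
        # video. x has shape (T, V, 3): T frames, V mesh vertices. Vertex
        # positions are generated directly (no parametric body model, per the
        # abstract); V = 6890 is an assumed mesh resolution.
        T, V = video_features.shape[0], 6890
        x = torch.randn(T, V, 3)
        for t in reversed(range(self.num_steps)):
            x = self.camera_model.denoise_step(x, t, cond=video_features)

        # Stage 2: sample world-space motion, conditioned on the camera-space
        # estimate, lifting and refining it into world coordinates.
        y = torch.randn_like(x)
        for t in reversed(range(self.num_steps)):
            y = self.world_model.denoise_step(y, t, cond=x)
        return y  # world-space vertex trajectories, shape (T, V, 3)
```

The design point the sketch tries to capture is the division of labor: the first model only has to generalize to diverse, noisy video conditioning, while the second only has to enforce global consistency given a rough camera-space estimate.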