This paper investigates the generation of realistic full-body human motion from a single head-mounted device equipped with an outward-facing color camera and visual SLAM capability. To address the ambiguity of this setup, we present HMD^2, a novel system that balances motion reconstruction and generation. From a reconstruction standpoint, it makes full use of the camera streams to produce both analytical and learned features, including head motion, the SLAM point cloud, and image embeddings. On the generative front, HMD^2 employs a multi-modal conditional motion diffusion model with a Transformer backbone to maintain the temporal coherence of generated motions, and uses autoregressive inpainting to enable online motion inference with minimal latency (0.17 seconds). We show that our system provides an effective and robust solution that scales to a diverse dataset of over 200 hours of motion in complex indoor and outdoor environments.
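For concreteness, the sketch below illustrates the two components named above: a Transformer-backbone conditional motion denoiser, and an autoregressive inpainting loop that clamps the overlap frames of each window to previously generated motion at every denoising step. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; all dimensions, the per-frame fused conditioning vector, and the simplified x0-prediction sampler are illustrative choices.

import torch
import torch.nn as nn

class MotionDiffusionTransformer(nn.Module):
    """Denoiser: predicts clean motion from a noisy window plus per-frame
    conditioning (hypothetically, fused head-pose / point-cloud / image
    features). All dimensions are illustrative, not from the paper."""
    def __init__(self, motion_dim=135, cond_dim=512, d_model=512, n_layers=8):
        super().__init__()
        self.motion_proj = nn.Linear(motion_dim, d_model)
        self.cond_proj = nn.Linear(cond_dim, d_model)
        self.time_embed = nn.Sequential(
            nn.Linear(1, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, motion_dim)

    def forward(self, x_noisy, t, cond):
        # x_noisy: (B, T, motion_dim); cond: (B, T, cond_dim); t: (B,)
        h = self.motion_proj(x_noisy) + self.cond_proj(cond)
        h = h + self.time_embed(t[:, None, None].float())  # broadcast over time
        return self.out(self.backbone(h))

@torch.no_grad()
def sample_window(model, cond, past_motion, n_steps=50):
    """Autoregressive inpainting (simplified): at every denoising step the
    first P frames of the window are overwritten with a forward-diffused
    copy of already-generated motion, so only the new frames are synthesized.
    Requires P < T. Uses a crude x0-prediction / re-noise sampler."""
    B, T = cond.shape[0], cond.shape[1]
    D = past_motion.shape[-1]
    P = past_motion.shape[1]                      # overlap length
    x = torch.randn(B, T, D)
    betas = torch.linspace(1e-4, 0.02, n_steps)
    alphas_cum = torch.cumprod(1.0 - betas, dim=0)
    for i in reversed(range(n_steps)):
        t = torch.full((B,), i)
        # inpaint: clamp prefix to the known past, noised to the current level
        noise = torch.randn_like(past_motion)
        x[:, :P] = alphas_cum[i].sqrt() * past_motion \
                   + (1 - alphas_cum[i]).sqrt() * noise
        x0_pred = model(x, t, cond)               # predict clean motion
        if i > 0:
            # re-noise the prediction down to step i-1 (simplified DDPM step)
            x = alphas_cum[i - 1].sqrt() * x0_pred \
                + (1 - alphas_cum[i - 1]).sqrt() * torch.randn_like(x)
        else:
            x = x0_pred
    x[:, :P] = past_motion                        # keep the overlap exactly
    return x

# Usage (shapes are illustrative): each new window reuses a short overlap of
# previously generated frames, which is what keeps streaming latency low.
model = MotionDiffusionTransformer()
cond = torch.randn(1, 60, 512)    # 60-frame window of fused conditioning
past = torch.randn(1, 15, 135)    # 15 overlap frames from the previous window
motion = sample_window(model, cond, past)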
@article{guzov2025_2409.13426,
  title   = {HMD^2: Environment-aware Motion Generation from Single Egocentric Head-Mounted Device},
  author  = {Vladimir Guzov and Yifeng Jiang and Fangzhou Hong and Gerard Pons-Moll and Richard Newcombe and C. Karen Liu and Yuting Ye and Lingni Ma},
  journal = {arXiv preprint arXiv:2409.13426},
  year    = {2025}
}