In 3D Human Motion Prediction (HMP), conventional methods train models on expensive motion capture data. The high cost of collecting such data limits its diversity, leading to poor generalization to unseen motions or subjects. To address this issue, this paper proposes enhancing HMP through additional learning with poses estimated from easily available videos. The 2D poses estimated from monocular videos are carefully transformed into motion capture-style 3D motions by our pipeline. Through additional learning with the obtained motions, the HMP model adapts to the test domain. Experimental results demonstrate the quantitative and qualitative effectiveness of our method.
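The adaptation idea above can be illustrated with a minimal sketch: a toy motion predictor, pretrained with a naive zero-velocity prior, is fine-tuned on pseudo-ground-truth motion sequences (standing in for the 3D motions lifted from video-estimated 2D poses) so that it fits the test domain. All names and the linear model here are illustrative assumptions, not the paper's actual pipeline or architecture.

```python
# Hedged sketch of test-domain adaptation, NOT the paper's implementation:
# a linear predictor of the next 1D pose from the last k poses, fine-tuned
# by SGD on pseudo-labeled motions obtained from videos (hypothetical data).

def predict(weights, history):
    """Predict the next pose as a weighted sum of the past k poses."""
    return sum(w * x for w, x in zip(weights, history))

def adapt(weights, motions, lr=0.01, epochs=200):
    """Fine-tune the predictor on pseudo-3D motions via SGD on squared error."""
    k = len(weights)
    w = list(weights)
    for _ in range(epochs):
        for seq in motions:
            for t in range(k, len(seq)):
                hist = seq[t - k:t]
                err = predict(w, hist) - seq[t]  # prediction residual
                for i in range(k):
                    w[i] -= lr * 2 * err * hist[i]  # gradient step
    return w

# "Pretrained" weights: copy the last frame (zero-velocity prior).
pretrained = [0.0, 1.0]

# Hypothetical test-domain motions, e.g. lifted from video-estimated poses;
# here, constant-velocity sequences the zero-velocity prior handles poorly.
pseudo_motions = [[0.0, 1.0, 2.0, 3.0, 4.0, 5.0],
                  [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]]

adapted = adapt(pretrained, pseudo_motions)
```

After adaptation, the weights move toward [-1, 2], i.e. linear extrapolation, which matches the constant-velocity test domain; the point is only that cheap pseudo-labels can shift a pretrained predictor toward the target distribution.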
@article{shimbo2025_2505.07301,
  title={Human Motion Prediction via Test-domain-aware Adaptation with Easily-available Human Motions Estimated from Videos},
  author={Katsuki Shimbo and Hiromu Taketsugu and Norimichi Ukita},
  journal={arXiv preprint arXiv:2505.07301},
  year={2025}
}