Semantics-aware Test-time Adaptation for 3D Human Pose Estimation

This work highlights a semantics misalignment in 3D human pose estimation. In test-time adaptation, the misalignment manifests as overly smoothed and unguided predictions: the smoothing settles predictions towards an average pose, and under occlusions or truncations the adaptation becomes fully unguided. To address this, we pioneer the integration of a semantics-aware motion prior into the test-time adaptation of 3D pose estimation. We leverage video understanding and a well-structured motion-text space to adapt the model's motion prediction so that it adheres to the video's semantics at test time. Additionally, we incorporate missing-2D-pose completion based on motion-text similarity; the completion strengthens the motion prior's guidance under occlusions and truncations. Our method significantly improves state-of-the-art 3D human pose estimation TTA techniques, achieving a more than 12% decrease in PA-MPJPE on 3DPW and 3DHP.
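To make the idea concrete, below is a minimal, hypothetical sketch of one semantics-aware test-time adaptation step. All module names, shapes, and hyperparameters (PoseEstimator, MotionEncoder, MotionDecoder, the confidence threshold, the loss weight) are illustrative assumptions, not the authors' released implementation: stand-in linear layers replace the pretrained video pose estimator and the pretrained motion-text encoders, and the "completion" simply fills low-confidence 2D joints from a text-consistent prior motion.

```python
# Hypothetical sketch of a semantics-aware TTA step (all names/shapes are
# illustrative placeholders, not the paper's released code).
import torch
import torch.nn as nn
import torch.nn.functional as F

J = 17   # number of body joints (assumed)
T = 16   # frames per video clip (assumed)
D = 512  # dimension of the shared motion-text embedding space (assumed)

class PoseEstimator(nn.Module):
    """Stand-in for a pretrained video-based 3D pose estimator."""
    def __init__(self):
        super().__init__()
        self.lift = nn.Linear(J * 2, J * 3)        # placeholder lifting net
    def forward(self, pose2d):                     # pose2d: (T, J, 2)
        return self.lift(pose2d.flatten(1)).view(T, J, 3)

class MotionEncoder(nn.Module):
    """Stand-in for a pretrained encoder mapping a 3D motion sequence
    into the shared motion-text space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(T * J * 3, D)
    def forward(self, motion3d):                   # motion3d: (T, J, 3)
        return F.normalize(self.proj(motion3d.flatten()), dim=-1)

class MotionDecoder(nn.Module):
    """Stand-in for a module producing a prior 3D motion consistent with the
    video's text semantics (e.g. retrieved/decoded from the motion-text space)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D, T * J * 3)
    def forward(self, text_emb):                   # text_emb: (D,)
        return self.proj(text_emb).view(T, J, 3)

def semantics_tta_step(estimator, motion_enc, motion_dec, text_emb,
                       pose2d, conf, optimizer, lam=1.0):
    """One adaptation step: keep the predicted motion close to the video's
    text semantics, and complete low-confidence 2D joints from a
    text-consistent prior motion before the reprojection loss."""
    pred3d = estimator(pose2d)                              # (T, J, 3)

    # Pose completion (illustrative): occluded/truncated joints are filled
    # from the prior motion, so those regions still receive semantic guidance.
    with torch.no_grad():
        prior2d = motion_dec(text_emb)[..., :2]             # orthographic proj.
    visible = (conf > 0.5).unsqueeze(-1).float()            # (T, J, 1)
    pose2d_completed = visible * pose2d + (1 - visible) * prior2d

    # Reprojection loss against the completed 2D evidence.
    loss_2d = F.mse_loss(pred3d[..., :2], pose2d_completed)

    # Semantic loss: cosine distance between the predicted motion's embedding
    # and the text embedding of the video's action description.
    motion_emb = motion_enc(pred3d)
    loss_sem = 1.0 - torch.dot(motion_emb, F.normalize(text_emb, dim=-1))

    loss = loss_2d + lam * loss_sem
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for detections and a text embedding.
estimator, motion_enc, motion_dec = PoseEstimator(), MotionEncoder(), MotionDecoder()
optimizer = torch.optim.Adam(estimator.parameters(), lr=1e-4)
pose2d, conf = torch.randn(T, J, 2), torch.rand(T, J)      # detections + confidences
text_emb = torch.randn(D)                                   # e.g. from a text encoder
semantics_tta_step(estimator, motion_enc, motion_dec, text_emb,
                   pose2d, conf, optimizer)
```

In this sketch, only the pose estimator is updated at test time while the motion-text modules stay frozen; the weight lam trades off 2D evidence against semantic adherence and would need tuning in practice.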
@article{lin2025_2502.10724,
  title   = {Semantics-aware Test-time Adaptation for 3D Human Pose Estimation},
  author  = {Qiuxia Lin and Rongyu Chen and Kerui Gu and Angela Yao},
  journal = {arXiv preprint arXiv:2502.10724},
  year    = {2025}
}