Unsupervised Cross-Domain 3D Human Pose Estimation via Pseudo-Label-Guided Global Transforms

17 April 2025
Jingjing Liu
Zhiyong Wang
Xinyu Fan
Amirhossein Dadashzadeh
Honghai Liu
Majid Mirmehdi
Abstract

Existing 3D human pose estimation methods often suffer degraded performance when applied to cross-scenario inference, due to domain shifts in characteristics such as camera viewpoint, position, posture, and body size. Among these factors, camera viewpoints and locations have been shown to contribute significantly to the domain gap by influencing the global positions of human poses. To address this, we propose a novel framework that explicitly conducts global transformations between pose positions in the camera coordinate systems of the source and target domains. We start with a Pseudo-Label Generation Module, applied to the 2D poses of the target dataset, to generate pseudo-3D poses. A Global Transformation Module then leverages a human-centered coordinate system as a bridging mechanism to align the positions and orientations of poses across the two domains, ensuring consistent spatial referencing. To further enhance generalization, a Pose Augmentor is incorporated to address variations in human posture and body size. This process is iterative, allowing refined pseudo-labels to progressively improve guidance for domain adaptation. Our method is evaluated on several cross-dataset benchmarks, including Human3.6M, MPI-INF-3DHP, and 3DPW. It outperforms state-of-the-art approaches and even surpasses the target-trained model.
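
As a reading aid, the following is a minimal NumPy sketch (not the authors' implementation) of the human-centered coordinate idea from the abstract: express a pseudo-3D pose relative to its own root joint and body orientation, then re-place it under a root position and orientation drawn from another camera frame. Joint indices, the assumed "up" direction, and all function names are illustrative assumptions.

```python
import numpy as np

def to_human_centered(pose_cam, root=0, left_hip=1, right_hip=4):
    """Map a (J, 3) pose in camera coordinates to a human-centered frame:
    origin at the root joint, x-axis along the hip line. Joint indices are
    illustrative and depend on the skeleton convention."""
    centered = pose_cam - pose_cam[root]                   # root at the origin
    x = centered[left_hip] - centered[right_hip]           # left-right body axis
    x = x / (np.linalg.norm(x) + 1e-8)
    up = np.array([0.0, -1.0, 0.0])                        # assumed camera "up"
    z = np.cross(x, up)
    z = z / (np.linalg.norm(z) + 1e-8)                     # forward-facing axis
    y = np.cross(z, x)                                     # completes a right-handed frame
    R = np.stack([x, y, z])                                # rows are the new basis vectors
    return centered @ R.T, pose_cam[root], R               # local pose + global placement

def to_camera(pose_hc, root_cam, R):
    """Inverse map: place a human-centered pose at a given root position and
    orientation in a camera frame (e.g. one sampled from the target domain)."""
    return pose_hc @ R + root_cam
```

In this sketch, a source-domain pose would be lifted into the human-centered frame with to_human_centered and then re-expressed in the target camera frame with to_camera, using a root position and rotation representative of the target domain; the paper's Global Transformation Module is described only at the level of the abstract here, so this should be read as a schematic, not the method itself.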

View on arXiv: https://arxiv.org/abs/2504.12699
@article{liu2025_2504.12699,
  title={Unsupervised Cross-Domain 3D Human Pose Estimation via Pseudo-Label-Guided Global Transforms},
  author={Jingjing Liu and Zhiyong Wang and Xinyu Fan and Amirhossein Dadashzadeh and Honghai Liu and Majid Mirmehdi},
  journal={arXiv preprint arXiv:2504.12699},
  year={2025}
}