
Two Views Are Better than One: Monocular 3D Pose Estimation with Multiview Consistency

Abstract

Deducing a 3D human pose from a single 2D image is inherently challenging because multiple 3D poses can correspond to the same 2D representation. 3D data can resolve this pose ambiguity, but it is expensive to record and requires an intricate setup that is often restricted to controlled lab environments. We propose a method that improves the performance of deep learning-based monocular 3D human pose estimation models by using multiview data only during training, not during inference. We introduce a novel loss function, consistency loss, which operates on two synchronized views. This approach is simpler than previous methods that require 3D ground truth or intrinsic and extrinsic camera parameters. Our consistency loss penalizes differences between the two predicted pose sequences after rigid alignment. We also demonstrate that our consistency loss substantially improves performance for fine-tuning without requiring 3D data. Furthermore, we show that using our consistency loss can yield state-of-the-art performance when training models from scratch in a semi-supervised manner. Our findings provide a simple way to capture new data, e.g., in a new domain. Such data can be recorded with off-the-shelf cameras and requires no calibration. We make all our code and data publicly available.
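To make the core idea concrete, the sketch below shows one plausible way to compute such a consistency loss: rigidly align the pose predicted from one view onto the pose predicted from the other (here via the Kabsch algorithm) and penalize the remaining per-joint error. The function names, the choice of Kabsch for the rigid alignment, and the per-frame formulation are our assumptions for illustration; the paper's actual implementation operates on pose sequences and may differ in detail.

```python
import numpy as np

def rigid_align(P, Q):
    """Rigidly align pose P onto pose Q (both (J, 3) joint arrays)
    using the Kabsch algorithm: optimal rotation plus translation."""
    mu_P, mu_Q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_P, Q - mu_Q              # center both joint sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)      # SVD of the 3x3 covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # optimal rotation
    return Pc @ R.T + mu_Q

def consistency_loss(pose_a, pose_b):
    """Mean squared joint distance between two single-view predictions
    after rigid alignment (hypothetical per-frame formulation)."""
    aligned = rigid_align(pose_a, pose_b)
    return np.mean(np.sum((aligned - pose_b) ** 2, axis=-1))

# Sanity check: a rotated and translated copy of a pose incurs ~0 loss,
# since the alignment absorbs the rigid transform between the two views.
rng = np.random.default_rng(0)
pose = rng.standard_normal((17, 3))          # 17 joints, as in Human3.6M
theta = np.pi / 5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
print(consistency_loss(pose @ Rz.T + 1.0, pose))  # prints ~0
```

Because the alignment absorbs any rigid transform between the two camera frames, the loss needs no intrinsic or extrinsic calibration, which matches the paper's stated advantage. During training, the same computation would be carried out on differentiable tensors (e.g., in PyTorch) so that gradients from both views flow back into the monocular estimator.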

@article{ingwersen2025_2311.12421,
  title={Two Views Are Better than One: Monocular 3D Pose Estimation with Multiview Consistency},
  author={Christian Keilstrup Ingwersen and Rasmus Tirsgaard and Rasmus Nylander and Janus Nørtoft Jensen and Anders Bjorholm Dahl and Morten Rieger Hannemose},
  journal={arXiv preprint arXiv:2311.12421},
  year={2025}
}