2D-3D Attention and Entropy for Pose Robust 2D Facial Recognition

Despite recent advances in facial recognition, there remains a fundamental issue: performance degrades substantially when there are large perspective (pose) differences between enrollment and query (probe) imagery. Therefore, we propose a novel domain-adaptive framework that improves performance across large pose discrepancies by enabling image-based (2D) representations to infer properties of inherently pose-invariant point cloud (3D) representations. Specifically, our proposed framework achieves better pose invariance by using (1) a shared (joint) attention mapping to emphasize the common patterns that are most correlated between 2D facial images and 3D facial data, and (2) a joint entropy regularizing loss that promotes consistency, enhancing correlations among the intersecting 2D and 3D representations, by leveraging both attention maps. This framework is evaluated on the FaceScape and ARL-VTF datasets, where it outperforms competitive methods by achieving profile (90°) TAR @ 1% FAR improvements of at least 7.1% and 1.57%, respectively.
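To make the two components concrete, here is a minimal NumPy sketch of one plausible reading of the abstract: a per-modality attention map over spatial locations, and a joint entropy term computed from the product of the 2D and 3D attention maps. All function names, the scoring rule, and the exact form of the entropy term are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_map(features):
    # Hypothetical attention: score each spatial location by its
    # feature-vector norm, then normalize into a distribution.
    return softmax(np.linalg.norm(features, axis=-1))

def joint_entropy(att_2d, att_3d, eps=1e-8):
    # Hypothetical joint entropy regularizer: entropy of the renormalized
    # elementwise product of the two attention maps. Minimizing it pushes
    # both modalities to agree on a few shared salient regions.
    joint = att_2d * att_3d
    joint = joint / (joint.sum() + eps)
    return float(-(joint * np.log(joint + eps)).sum())

# Toy example: 49 spatial locations (a 7x7 grid), 64-dim features each.
rng = np.random.default_rng(0)
f2d = rng.standard_normal((49, 64))  # stand-in for 2D image features
f3d = rng.standard_normal((49, 64))  # stand-in for 3D point-cloud features
a2d, a3d = attention_map(f2d), attention_map(f3d)
loss = joint_entropy(a2d, a3d)
```

In this reading, the joint entropy loss would be added to a standard recognition objective so that gradients encourage the 2D branch to attend where the pose-invariant 3D branch attends.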
@article{peace2025_2505.09073,
  title   = {2D-3D Attention and Entropy for Pose Robust 2D Facial Recognition},
  author  = {J. Brennan Peace and Shuowen Hu and Benjamin S. Riggan},
  journal = {arXiv preprint arXiv:2505.09073},
  year    = {2025}
}