2D-3D Attention and Entropy for Pose Robust 2D Facial Recognition

IEEE International Conference on Automatic Face & Gesture Recognition (FG), 2025
Main: 8 pages; Bibliography: 3 pages; 5 figures; 4 tables
Abstract

Despite recent advances in facial recognition, performance still degrades substantially when there are large perspective (pose) differences between enrollment and query (probe) imagery. We therefore propose a novel domain-adaptive framework that improves performance across large pose discrepancies by enabling image-based (2D) representations to infer properties of inherently pose-invariant point cloud (3D) representations. Specifically, our framework achieves better pose invariance by using (1) a shared (joint) attention mapping that emphasizes the common patterns most strongly correlated between 2D facial images and 3D facial data, and (2) a joint entropy regularizing loss that leverages both attention maps to promote consistency, enhancing correlations among the intersecting 2D and 3D representations. The framework is evaluated on the FaceScape and ARL-VTF datasets, where it outperforms competitive methods, improving profile (90°+) TAR @ 1% FAR by at least 7.1% and 1.57%, respectively.
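The abstract does not give the form of the joint entropy regularizer, so the following is only a minimal NumPy sketch of one plausible variant: each modality's attention scores are normalized into a spatial distribution, the elementwise product is renormalized into a joint map, and its Shannon entropy is penalized so that positions where the 2D and 3D maps agree are sharpened. The function name, the product-based combination rule, and all shapes are assumptions, not the paper's actual loss.

```python
import numpy as np

def joint_entropy_loss(attn_2d, attn_3d, eps=1e-8):
    """Hypothetical joint-entropy regularizer over paired attention maps.

    attn_2d, attn_3d: (batch, positions) arrays of raw attention scores
    from the 2D and 3D branches (an illustrative assumption).
    """
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    p2, p3 = softmax(attn_2d), softmax(attn_3d)
    # Elementwise product as a joint "agreement" map, renormalized to a
    # distribution over spatial positions (assumed combination rule).
    joint = p2 * p3
    joint = joint / (joint.sum(axis=1, keepdims=True) + eps)
    # Shannon entropy of the joint map, averaged over the batch;
    # minimizing it concentrates mass where both modalities agree.
    return float(-(joint * np.log(joint + eps)).sum(axis=1).mean())
```

A training loop would add this term, scaled by a weight, to the recognition loss; the entropy of a distribution over N positions is bounded by log N, so the penalty is always finite and nonnegative.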
