SLYKLatent: A Learning Framework for Gaze Estimation Using Deep Facial Feature Learning

IEEE Transactions on Human-Machine Systems (IEEE Trans. Human-Machine Syst.), 2024
12 pages main text, 8 figures, 2-page bibliography, 2-page appendix
Abstract

In this research, we present SLYKLatent, a novel approach for enhancing gaze estimation by addressing appearance instability in datasets caused by aleatoric uncertainty, covariate shift, and poor test-domain generalization. SLYKLatent first uses self-supervised learning to pretrain on facial expression datasets, then refines the representation with a patch-based tri-branch network and an inverse explained variance-weighted training loss function. On benchmark datasets, SLYKLatent achieves a 10.9% improvement on Gaze360, surpasses the best MPIIFaceGaze results by 3.8%, and leads on a subset of ETH-XGaze by 11.6%, exceeding existing methods by significant margins. Adaptability tests on RAF-DB and AffectNet yield accuracies of 86.4% and 60.9%, respectively. Ablation studies confirm the effectiveness of SLYKLatent's novel components.
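The abstract does not specify the exact form of the inverse explained variance-weighted loss. As a rough illustrative sketch only (the function name, inputs, and normalization scheme below are assumptions, not the paper's definition), one might weight each branch's loss by the inverse of its explained variance, so that branches whose predictions explain less of the target variance receive more training emphasis:

```python
import numpy as np

def inverse_ev_weighted_loss(branch_losses, explained_variances, eps=1e-8):
    """Hypothetical sketch of an inverse explained variance-weighted loss.

    Each branch's loss is weighted by 1 / (explained variance + eps);
    weights are normalized to sum to 1 before combining the losses.
    """
    losses = np.asarray(branch_losses, dtype=float)
    ev = np.asarray(explained_variances, dtype=float)
    weights = 1.0 / (ev + eps)          # low explained variance -> high weight
    weights /= weights.sum()            # normalize so weights sum to 1
    return float(np.dot(weights, losses))

# Example: the branch with lower explained variance dominates the total.
total = inverse_ev_weighted_loss([2.0, 1.0], [0.2, 0.8])
```

With explained variances (0.2, 0.8), the normalized weights are (0.8, 0.2), so the weaker branch's loss contributes most of the total; this is one plausible reading of "inverse explained variance-weighted", not the paper's verified formulation.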
