Facial expression recognition (FER) is a key research area in computer vision and human-computer interaction. Despite recent advances in deep learning, challenges persist, particularly in generalizing to unseen scenarios: in zero-shot settings, the performance of state-of-the-art FER models drops significantly. To address this problem, the community has recently begun to explore the integration of knowledge from Large Language Models into visual tasks. In this work, we evaluate a broad collection of locally executed Visual Language Models (VLMs), compensating for their lack of task-specific knowledge by adopting a Visual Question Answering strategy. We compare the proposed pipeline with state-of-the-art FER models, both with and without VLM integration, on well-known FER benchmarks: AffectNet, FERPlus, and RAF-DB. The results show that some VLMs perform remarkably well in zero-shot FER scenarios, indicating that further exploration is needed to improve FER generalization.
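To make the Visual Question Answering strategy concrete, the sketch below shows one way a locally executed model could be queried for zero-shot FER. This is a minimal illustration assuming the Hugging Face `transformers` VQA pipeline; the model checkpoint, the prompt wording, the label set, and the file name `face.jpg` are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a VQA-style zero-shot FER query (illustrative only).
from transformers import pipeline

# Any locally executable VQA-capable checkpoint can be substituted here;
# this ViLT model is just a publicly available example.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

# Candidate labels drawn from the basic-expression categories used by
# benchmarks such as AffectNet, FERPlus, and RAF-DB.
LABELS = ["neutral", "happiness", "sadness", "surprise",
          "fear", "disgust", "anger"]

# Phrase the recognition task as a question listing the candidate labels.
question = ("Which facial expression is shown in this face: "
            + ", ".join(LABELS) + "?")

# Query the model and keep the highest-scoring answer that matches one
# of the candidate expression labels.
answers = vqa(image="face.jpg", question=question, top_k=10)
prediction = next((a["answer"] for a in answers
                   if a["answer"] in LABELS), None)
print(prediction)
```

Constraining the free-form VQA answer to a fixed label set, as above, is one simple way to map open-ended model output onto the closed categories used by FER benchmarks.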
@article{castrillón-santana2025_2504.21309,
  title={An Evaluation of a Visual Question Answering Strategy for Zero-shot Facial Expression Recognition in Still Images},
  author={Modesto Castrillón-Santana and Oliverio J. Santana and David Freire-Obregón and Daniel Hernández-Sosa and Javier Lorenzo-Navarro},
  journal={arXiv preprint arXiv:2504.21309},
  year={2025}
}