Facial Expression Recognition using Squeeze and Excitation-powered Swin Transformers

Abstract

The interpretation of facial emotions plays a crucial role in human communication, allowing people to recognize emotions such as happiness, sadness, and anger through facial expressions and vocal tones. Facial Emotion Recognition (FER) is an area of great interest in computer vision and AI, with extensive academic and commercial potential in fields such as security, advertising, and entertainment. We present a FER framework based on the Swin Vision Transformer (SwinT) augmented with Squeeze-and-Excitation (SE) blocks, which uses a transformer's attention mechanism to address vision tasks. Because transformers typically require substantial data to match other competitive models, our approach combines the vision transformer with SE blocks and a Sharpness-Aware Minimizer (SAM). Our challenge was to build an effective FER model on the SwinT configuration that can detect facial emotions from a small amount of data. We trained our model on a hybrid dataset and evaluated its performance on the AffectNet dataset, achieving an F1-score of 0.5420. Our model outperformed the winner of the Affective Behavior Analysis in-the-Wild (ABAW) Competition, held in conjunction with the European Conference on Computer Vision (ECCV) 2022.
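The abstract's key architectural ingredient, the Squeeze-and-Excitation block, recalibrates channel responses by pooling each channel to a scalar, passing the result through a small bottleneck, and rescaling the channels with the resulting gates. The paper does not give implementation details, so the following is only a minimal NumPy sketch of that mechanism; the weight shapes, the reduction ratio `r`, and the function name `squeeze_excite` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """SE recalibration sketch: x is a (C, H, W) feature map."""
    s = x.mean(axis=(1, 2))                    # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s + b1, 0.0)           # excitation: bottleneck FC + ReLU
    a = 1.0 / (1.0 + np.exp(-(w2 @ z + b2)))   # FC + sigmoid -> per-channel gates in (0, 1)
    return x * a[:, None, None]                # rescale each channel by its gate

# toy example with 4 channels and an assumed reduction ratio r = 2
rng = np.random.default_rng(0)
C, r = 4, 2
x = rng.standard_normal((C, 8, 8))
w1, b1 = rng.standard_normal((C // r, C)), np.zeros(C // r)
w2, b2 = rng.standard_normal((C, C // r)), np.zeros(C)
y = squeeze_excite(x, w1, b1, w2, b2)
```

In the paper's setting, such a block would sit on top of the SwinT feature maps so that informative channels are emphasized before classification; since the sigmoid gates lie strictly in (0, 1), the block can only attenuate channels, never amplify them.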
