Facial expression recognition (FER) is a subset of computer vision with important applications in human-computer interaction, healthcare, and customer service. FER represents a challenging problem space because accurate classification requires a model to differentiate between subtle changes in facial features. In this paper, we examine the use of multi-modal transfer learning to improve performance on a challenging video-based FER dataset, Dynamic Facial Expression in-the-Wild (DFEW). Using a combination of pretrained ResNets, OpenPose, and OmniVec networks, we explore the impact of cross-temporal, multi-modal features on classification accuracy. Ultimately, we find that these fine-tuned multi-modal feature generators modestly improve the accuracy of our transformer-based classification model.
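For concreteness, the sketch below shows one way such a pipeline could be wired up in PyTorch: per-frame appearance features (e.g. from a pretrained ResNet) and pose features (e.g. from OpenPose) are projected into a shared space and encoded by a transformer, with a clip-level embedding (e.g. from OmniVec) added as an extra token. All dimensions, the token-level fusion strategy, and the seven-class output (DFEW's emotion categories) are illustrative assumptions; this is not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class MultiModalFusionClassifier(nn.Module):
    """Minimal sketch: fuse per-frame appearance and pose features with a
    clip-level embedding, then classify the sequence with a transformer."""

    def __init__(self, appearance_dim=2048, pose_dim=128, clip_dim=1024,
                 d_model=512, num_classes=7, num_layers=4, num_heads=8):
        super().__init__()
        # Project each modality's features into a shared embedding space.
        # All dimensions here are assumptions, not values from the paper.
        self.appearance_proj = nn.Linear(appearance_dim, d_model)
        self.pose_proj = nn.Linear(pose_dim, d_model)
        self.clip_proj = nn.Linear(clip_dim, d_model)
        # Learned [CLS] token, pooled for the final prediction.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, appearance, pose, clip_embed):
        # appearance: (B, T, appearance_dim) per-frame CNN features
        # pose:       (B, T, pose_dim) per-frame keypoint features
        # clip_embed: (B, clip_dim) one embedding for the whole clip
        tokens = self.appearance_proj(appearance) + self.pose_proj(pose)
        clip_token = self.clip_proj(clip_embed).unsqueeze(1)       # (B, 1, d)
        cls = self.cls_token.expand(appearance.size(0), -1, -1)    # (B, 1, d)
        seq = torch.cat([cls, clip_token, tokens], dim=1)          # (B, T+2, d)
        encoded = self.encoder(seq)
        return self.head(encoded[:, 0])  # emotion logits from the [CLS] token
```

Summing the per-frame projections is only one plausible fusion choice; concatenating modality tokens along the sequence axis, or cross-attending between modalities, would be equally reasonable starting points.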
@article{engel2025_2504.21248,
  title   = {Multi-modal Transfer Learning for Dynamic Facial Emotion Recognition in the Wild},
  author  = {Ezra Engel and Lishan Li and Chris Hudy and Robert Schleusner},
  journal = {arXiv preprint arXiv:2504.21248},
  year    = {2025}
}