Emotion Recognition with Pre-Trained Transformers Using Multimodal Signals
Affective Computing and Intelligent Interaction (ACII), 2022
Abstract
In this paper, we address the problem of multimodal emotion recognition from multiple physiological signals. We demonstrate that a Transformer-based approach is suitable for this task. In addition, we present how such models may be pre-trained in a multimodal scenario to improve emotion recognition performance. We evaluate the benefits of using multimodal inputs and pre-training with our approach on a state-of-the-art dataset.
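The abstract does not describe the architecture in detail. As a rough, hypothetical illustration of the general idea (projecting several physiological modalities into a shared token space and processing them jointly with self-attention), here is a minimal NumPy sketch; the modality names, dimensions, and single-block design are all assumptions, not the authors' model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, Wq, Wk, Wv):
    # Scaled dot-product self-attention over the token axis.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return scores @ v

rng = np.random.default_rng(0)
d = 32            # shared embedding width (assumed)
n_classes = 4     # e.g. four emotion categories (assumed)

# Two hypothetical physiological modalities with different feature dims.
ecg = rng.normal(size=(20, 8))   # 20 time steps, 8 ECG features
eda = rng.normal(size=(20, 3))   # 20 time steps, 3 EDA features

# Modality-specific linear projections into the shared token space.
P_ecg = rng.normal(scale=0.1, size=(8, d))
P_eda = rng.normal(scale=0.1, size=(3, d))
tokens = np.concatenate([ecg @ P_ecg, eda @ P_eda], axis=0)  # (40, d)

# One self-attention block, mean pooling, and a linear classifier head.
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))
W_out = rng.normal(scale=0.1, size=(d, n_classes))
pooled = attention(tokens, Wq, Wk, Wv).mean(axis=0)
probs = softmax(pooled @ W_out)   # class probabilities, shape (4,)
```

In a pre-training scenario, the attention block's weights would be learned first on an auxiliary multimodal objective before fine-tuning the classifier head on emotion labels.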
