TAGF: Time-aware Gated Fusion for Multimodal Valence-Arousal Estimation

Yubeen Lee
Sangeun Lee
Chaewon Park
Junyeop Cha
Eunil Park
Main: 6 pages, 2 figures, 2 tables. Bibliography: 3 pages.
Abstract

Multimodal emotion recognition often suffers from degraded valence-arousal estimation due to noise and misalignment between the audio and visual modalities. To address this challenge, we introduce TAGF, a Time-aware Gated Fusion framework for multimodal emotion recognition. TAGF adaptively modulates the contribution of recursive attention outputs based on temporal dynamics: a BiLSTM-based temporal gating mechanism learns the relative importance of each recursive step and integrates the resulting multi-step cross-modal features accordingly. By embedding temporal awareness into the recursive fusion process, TAGF captures both the sequential evolution of emotional expressions and the complex interplay between modalities. Experimental results on the Aff-Wild2 dataset demonstrate that TAGF achieves competitive performance compared with existing recursive attention-based models, while exhibiting strong robustness to cross-modal misalignment and reliably modeling dynamic emotional transitions in real-world conditions.
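To make the gating idea concrete, the following is a minimal PyTorch sketch of a BiLSTM-based temporal gate that weights the outputs of several recursive cross-modal attention steps. The module name, feature dimensions, number of recursive steps, and the way the gate is applied are illustrative assumptions for exposition, not the authors' released implementation.

# Sketch of BiLSTM-based temporal gating over recursive attention outputs.
# All names and dimensions below are assumptions, not the paper's exact code.
import torch
import torch.nn as nn

class TemporalGatedFusion(nn.Module):
    def __init__(self, feat_dim=256, num_steps=3, hidden_dim=128):
        super().__init__()
        # BiLSTM runs along the temporal axis to provide time-aware context
        self.bilstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Projects the temporal context to one gate weight per recursive step
        self.gate_proj = nn.Linear(2 * hidden_dim, num_steps)

    def forward(self, step_outputs):
        # step_outputs: list of K tensors, each (batch, time, feat_dim),
        # one per recursive cross-modal attention step
        stacked = torch.stack(step_outputs, dim=-1)              # (B, T, D, K)
        context, _ = self.bilstm(stacked.mean(dim=-1))           # (B, T, 2H)
        gates = torch.softmax(self.gate_proj(context), dim=-1)   # (B, T, K)
        # Time-varying weighted sum over the recursive-step outputs
        fused = (stacked * gates.unsqueeze(2)).sum(dim=-1)       # (B, T, D)
        return fused

In this sketch the gate weights are recomputed at every time step, so the relative importance of early versus late recursive attention passes can shift as the emotional expression evolves; a single static set of weights would instead fix that trade-off for the whole sequence.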

@article{lee2025_2507.02080,
  title={TAGF: Time-aware Gated Fusion for Multimodal Valence-Arousal Estimation},
  author={Yubeen Lee and Sangeun Lee and Chaewon Park and Junyeop Cha and Eunil Park},
  journal={arXiv preprint arXiv:2507.02080},
  year={2025}
}