Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition

23 May 2023
Yaoting Wang
Yuanchao Li
Paul Pu Liang
Louis-Philippe Morency
P. Bell
Catherine Lai

Papers citing "Cross-Attention is Not Enough: Incongruity-Aware Dynamic Hierarchical Fusion for Multimodal Affect Recognition"

2 / 2 papers shown
On the Use of Modality-Specific Large-Scale Pre-Trained Encoders for Multimodal Sentiment Analysis
Atsushi Ando, Ryo Masumura, Akihiko Takashima, Satoshi Suzuki, Naoki Makishima, Keita Suzuki, Takafumi Moriya, Takanori Ashihara, Hiroshi Sato
28 Oct 2022

Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition?
Vandana Rajan, A. Brutti, Andrea Cavallaro
18 Feb 2022