ResearchTrend.AI
HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition

14 April 2023
Soumya Dutta, Sriram Ganapathy

Papers citing "HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition"

4 / 4 papers shown
Bimodal Connection Attention Fusion for Speech Emotion Recognition
Jiachen Luo, Huy Phan, Lin Wang, Joshua D. Reiss (08 Mar 2025)
LyS at SemEval-2024 Task 3: An Early Prototype for End-to-End Multimodal Emotion Linking as Graph-Based Parsing
Ana Ezquerro, David Vilares (10 May 2024)
Fusion approaches for emotion recognition from speech using acoustic and text-based features
L. Pepino, Pablo Riera, Luciana Ferrer, Agustin Gravano (27 Mar 2024)
LEAF: A Learnable Frontend for Audio Classification
Neil Zeghidour, O. Teboul, Félix de Chaumont Quitry, Marco Tagliasacchi (21 Jan 2021)
Topics: VLM, AAML