ResearchTrend.AI
Attentive Modality Hopping Mechanism for Speech Emotion Recognition

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019
29 November 2019
Seunghyun Yoon
S. Dey
Hwanhee Lee
Kyomin Jung
arXiv:1912.00846

Papers citing "Attentive Modality Hopping Mechanism for Speech Emotion Recognition"

9 / 9 papers shown

1. MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild
   K. Chumachenko, Alexandros Iosifidis, Moncef Gabbouj
   13 Apr 2024

2. HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition
   Information Fusion (Inf. Fusion), 2024
   Guoying Zhao, Zheng Lian, Yinan Han, Jianhua Tao
   11 Jan 2024

3. An Empirical Study and Improvement for Speech Emotion Recognition
   IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2023
   Zhanghua Wu, Yizhe Lu, Xinyu Dai
   08 Apr 2023

4. Multimodal Speech Emotion Recognition using Cross Attention with Aligned Audio and Text
   Interspeech, 2020
   Yoonhyung Lee, Seunghyun Yoon, Kyomin Jung
   26 Jul 2022

5. Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information
   IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022
   Heqing Zou, Yuke Si, Chen Chen, D. Rajan, Chng Eng Siong
   29 Mar 2022

6. Is Cross-Attention Preferable to Self-Attention for Multi-Modal Emotion Recognition?
   IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022
   Vandana Rajan, Alessio Brutti, Andrea Cavallaro
   18 Feb 2022

7. A Two-stage Multi-modal Affect Analysis Framework for Children with Autism Spectrum Disorder
   Jicheng Li, Anjana Bhat, R. Barmaki
   17 Jun 2021

8. Multi-Modal Emotion Detection with Transfer Learning
   Amith Ananthram, Kailash Saravanakumar, Jessica Huynh, Homayoon Beigi
   13 Nov 2020

9. Multimodal Embeddings from Language Models
   IEEE Signal Processing Letters (SPL), 2019
   Shao-Yen Tseng, P. Georgiou, Shrikanth Narayanan
   10 Sep 2019