Multi-attention Recurrent Network for Human Communication Comprehension
Amir Zadeh, Paul Pu Liang, Soujanya Poria, Prateek Vij, Min Zhang, Louis-Philippe Morency
arXiv:1802.00923 · 3 February 2018

Papers citing "Multi-attention Recurrent Network for Human Communication Comprehension"

Showing 50 of 62 citing papers.

• Multi-Modality Collaborative Learning for Sentiment Analysis. Shanmin Wang, Chengguang Liu, Qingshan Liu. 21 Jan 2025.
• Enhancing Multi-Modal Video Sentiment Classification Through Semi-Supervised Clustering. Mehrshad Saadatinia, Minoo Ahmadi, Armin Abdollahi. 11 Jan 2025.
• End-to-end Semantic-centric Video-based Multimodal Affective Computing [VGen]. Ronghao Lin, Ying Zeng, Sijie Mai, Haifeng Hu. 14 Aug 2024.
• FuseMoE: Mixture-of-Experts Transformers for Fleximodal Fusion [MoE]. Xing Han, Huy Nguyen, Carl Harris, Nhat Ho, Suchi Saria. 05 Feb 2024.
• MERBench: A Unified Evaluation Benchmark for Multimodal Emotion Recognition. Zheng Lian, Guoying Zhao, Yong Ren, Hao Gu, Haiyang Sun, Lan Chen, Bin Liu, Jianhua Tao. 07 Jan 2024.
• Modality-Collaborative Transformer with Hybrid Feature Reconstruction for Robust Emotion Recognition. Chengxin Chen, Pengyuan Zhang. 26 Dec 2023.
• Modality-invariant and Specific Prompting for Multimodal Human Perception Understanding. Hao Sun, Ziwei Niu, Xinyao Yu, Jiaqing Liu, Yen-Wei Chen, Lanfen Lin. 17 Nov 2023.
• Self-Supervised Learning for Audio-Based Emotion Recognition. Peranut Nimitsurachat, Peter Washington. 23 Jul 2023.
• Cross-modal fusion techniques for utterance-level emotion recognition from text and speech. Jiacheng Luo, Huy P Phan, Joshua Reiss. 05 Feb 2023.
• InterMulti: Multi-view Multimodal Interactions with Text-dominated Hierarchical High-order Fusion for Emotion Analysis. Feng Qiu, Wanzeng Kong, Yu-qiong Ding. 20 Dec 2022.
• Multi-task Learning for Cross-Lingual Sentiment Analysis. Gaurish Thakkar, Nives Mikelic Preradović, Marko Tadić. 14 Dec 2022.
• A Self-Adjusting Fusion Representation Learning Model for Unaligned Text-Audio Sequences. Kaicheng Yang, Ruxuan Zhang, Hua Xu, Kai Gao. 12 Nov 2022.
• MARLIN: Masked Autoencoder for facial video Representation LearnINg [ViT, CVBM]. Zhixi Cai, Shreya Ghosh, Kalin Stefanov, Abhinav Dhall, Jianfei Cai, Hamid Rezatofighi, Reza Haffari, Munawar Hayat. 12 Nov 2022.
• Multimodal Information Bottleneck: Learning Minimal Sufficient Unimodal and Multimodal Representations. Sijie Mai, Ying Zeng, Haifeng Hu. 31 Oct 2022.
• On the Use of Modality-Specific Large-Scale Pre-Trained Encoders for Multimodal Sentiment Analysis. Atsushi Ando, Ryo Masumura, Akihiko Takashima, Satoshi Suzuki, Naoki Makishima, Keita Suzuki, Takafumi Moriya, Takanori Ashihara, Hiroshi Sato. 28 Oct 2022.
• Knowledge Transfer For On-Device Speech Emotion Recognition with Neural Structured Learning. Yi Chang, Zhao Ren, Thanh Tam Nguyen, Kun Qian, Björn W. Schuller. 26 Oct 2022.
• Multimodal Contrastive Learning via Uni-Modal Coding and Cross-Modal Prediction for Multimodal Sentiment Analysis [SSL]. Ronghao Lin, Haifeng Hu. 26 Oct 2022.
• Exploring Interactions and Regulations in Collaborative Learning: An Interdisciplinary Multimodal Dataset. Yante Li, Yang Liu, K. Nguyen, Henglin Shi, Eija Vuorenmaa, S. Jarvela, Guoying Zhao. 11 Oct 2022.
• Progressive Fusion for Multimodal Integration. Shiv Shankar, Laure Thompson, M. Fiterau. 01 Sep 2022.
• Video-based Cross-modal Auxiliary Network for Multimodal Sentiment Analysis. Rongfei Chen, Wenju Zhou, Yang Li, Huiyu Zhou. 30 Aug 2022.
• Cross-Modality Gated Attention Fusion for Multimodal Sentiment Analysis. Ming-Xin Jiang, Shaoxiong Ji. 25 Aug 2022.
• CubeMLP: An MLP-based Model for Multimodal Sentiment Analysis and Depression Estimation. Hao Sun, Hongyi Wang, Jiaqing Liu, Yen-Wei Chen, Lanfen Lin. 28 Jul 2022.
• Counterfactual Reasoning for Out-of-distribution Multimodal Sentiment Analysis [OODD]. Teng Sun, Wenjie Wang, Liqiang Jing, Yiran Cui, Xuemeng Song, Liqiang Nie. 24 Jul 2022.
• EmoCaps: Emotion Capsule based Model for Conversational Emotion Recognition. Zaijing Li, Fengxiao Tang, Ming Zhao, Yusen Zhu. 25 Mar 2022.
• M-SENA: An Integrated Platform for Multimodal Sentiment Analysis. Huisheng Mao, Ziqi Yuan, Hua Xu, Wenmeng Yu, Yihe Liu, Kai Gao. 23 Mar 2022.
• MMLatch: Bottom-up Top-down Fusion for Multimodal Sentiment Analysis. Georgios Paraskevopoulos, Efthymios Georgiou, Alexandros Potamianos. 24 Jan 2022.
• Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition. Yi Zhang, Mingyuan Chen, Jundong Shen, Chongjun Wang. 15 Jan 2022.
• MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition. Jinming Zhao, Ruichen Li, Qin Jin, Xinchao Wang, Haizhou Li. 27 Oct 2021.
• Dyadformer: A Multi-modal Transformer for Long-Range Modeling of Dyadic Interactions. D. Curto, Albert Clapés, Javier Selva, Sorina Smeureanu, Julio C. S. Jacques Junior, ..., G. Guilera, D. Leiva, T. Moeslund, Sergio Escalera, Cristina Palmero. 20 Sep 2021.
• TEASEL: A Transformer-Based Speech-Prefixed Language Model. Mehdi Arjmand, M. Dousti, H. Moradi. 12 Sep 2021.
• Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis. Sijie Mai, Ying Zeng, Shuangjia Zheng, Haifeng Hu. 04 Sep 2021.
• CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations. Hang Li, Yunxing Kang, Tianqiao Liu, Wenbiao Ding, Zitao Liu. 01 Sep 2021.
• Improving Multimodal fusion via Mutual Dependency Maximisation. Pierre Colombo, E. Chapuis, Matthieu Labeau, Chloé Clavel. 31 Aug 2021.
• Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies. Sicheng Zhao, Guoli Jia, Jufeng Yang, Guiguang Ding, Kurt Keutzer. 18 Aug 2021.
• Graph Capsule Aggregation for Unaligned Multimodal Sequences. Jianfeng Wu, Sijie Mai, Haifeng Hu. 17 Aug 2021.
• Bi-Bimodal Modality Fusion for Correlation-Controlled Multimodal Sentiment Analysis. Wei Han, Hui Chen, Alexander Gelbukh, Amir Zadeh, Louis-Philippe Morency, Soujanya Poria. 28 Jul 2021.
• M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis. Xingbo Wang, Jianben He, Zhihua Jin, Muqiao Yang, Yong Wang, Huamin Qu. 17 Jul 2021.
• 'Exercise? I thought you said Extra Fries': Leveraging Sentence Demarcations and Multi-hop Attention for Meme Affect Analysis. Shraman Pramanick, Md. Shad Akhtar, Tanmoy Chakraborty. 23 Mar 2021.
• The Multimodal Sentiment Analysis in Car Reviews (MuSe-CaR) Dataset: Collection, Insights and Improvements. Lukas Stappen, Alice Baird, Lea Schumann, Björn Schuller. 15 Jan 2021.
• Context-Aware Personality Inference in Dyadic Scenarios: Introducing the UDIVA Dataset. Cristina Palmero, Javier Selva, Sorina Smeureanu, Julio C. S. Jacques Junior, Albert Clapés, ..., Zejian Zhang, D. Gallardo-Pujol, G. Guilera, D. Leiva, Sergio Escalera. 28 Dec 2020.
• Hierachical Delta-Attention Method for Multimodal Fusion. Kunjal Panchal. 22 Nov 2020.
• Deep-HOSeq: Deep Higher Order Sequence Fusion for Multimodal Sentiment Analysis. Sunny Verma, Jiwei Wang, Zhefeng Ge, Rujia Shen, Fan Jin, Yang Wang, Fang Chen, Wei Liu. 16 Oct 2020.
• Jointly Fine-Tuning "BERT-like" Self Supervised Models to Improve Multimodal Speech Emotion Recognition. Shamane Siriwardhana, Andrew Reis, Rivindu Weerasekera, Suranga Nanayakkara. 15 Aug 2020.
• MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis. Devamanyu Hazarika, Roger Zimmermann, Soujanya Poria. 07 May 2020.
• Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research. Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Rada Mihalcea. 01 May 2020.
• Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection and Sentiment Analysis in Conversation. Aman Shenoy, Ashish Sardana. 19 Feb 2020.
• Factorized Multimodal Transformer for Multimodal Sequential Learning. Amir Zadeh, Chengfeng Mao, Kelly Shi, Yiwei Zhang, Paul Pu Liang, Soujanya Poria, Louis-Philippe Morency. 22 Nov 2019.
• Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion [GAN]. Sijie Mai, Haifeng Hu, Songlong Xing. 18 Nov 2019.
• DialogueGCN: A Graph Convolutional Neural Network for Emotion Recognition in Conversation. Deepanway Ghosal, Navonil Majumder, Soujanya Poria, Niyati Chhaya, Alexander Gelbukh. 30 Aug 2019.
• Multi-modal Sentiment Analysis using Deep Canonical Correlation Analysis. Zhongkai Sun, P. Sarma, W. Sethares, E. Bucy. 15 Jul 2019.