Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors
arXiv:1811.09362, 23 November 2018
Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, Louis-Philippe Morency

Papers citing "Words Can Shift: Dynamically Adjusting Word Representations Using Nonverbal Behaviors" (18 of 18 papers shown)

TACFN: Transformer-based Adaptive Cross-modal Fusion Network for Multimodal Emotion Recognition
Feng Liu, Ziwang Fu, Y. Wang, Qijian Zheng
10 May 2025

Multimodal Emotion Recognition using Audio-Video Transformer Fusion with Cross Attention
Joe Dhanith, Shravan Venkatraman, Modigari Narendra, Vigya Sharma, Santhosh Malarvannan
20 Feb 2025

TCAN: Text-oriented Cross Attention Network for Multimodal Sentiment Analysis
Ming Zhou, Yunfei Feng, Ziqi Zhou, Kai Wang, Tong Wang, Dong-ming Yan
06 Apr 2024

Multimodal Sentiment Analysis with Missing Modality: A Knowledge-Transfer Approach
Weide Liu, Huijing Zhan, Hao Chen, Fengmao Lv
28 Dec 2023

Modality-Collaborative Transformer with Hybrid Feature Reconstruction for Robust Emotion Recognition
Chengxin Chen, Pengyuan Zhang
26 Dec 2023

Shared and Private Information Learning in Multimodal Sentiment Analysis with Deep Modal Alignment and Self-supervised Multi-Task Learning
Songning Lai, Jiakang Li, Guinan Guo, Xifeng Hu, Yulong Li, ..., Yutong Liu, Zhaoxia Ren, Chun Wan, Danmin Miao, Zhi Liu
15 May 2023

EffMulti: Efficiently Modeling Complex Multimodal Interactions for Emotion Analysis
Feng Qiu, Chengyang Xie, Yu-qiong Ding, Wanzeng Kong
16 Dec 2022

UniMSE: Towards Unified Multimodal Sentiment Analysis and Emotion Recognition
Guimin Hu, Ting-En Lin, Yi Zhao, Guangming Lu, Yuchuan Wu, Yongbin Li
21 Nov 2022

Make Acoustic and Visual Cues Matter: CH-SIMS v2.0 Dataset and AV-Mixup Consistent Module
Yih-Ling Liu, Ziqi Yuan, Huisheng Mao, Zhiyun Liang, Wanqiuyue Yang, Yuanzhe Qiu, Tie Cheng, Xiaoteng Li, Hua Xu, Kai Gao
22 Aug 2022

COLD Fusion: Calibrated and Ordinal Latent Distribution Fusion for Uncertainty-Aware Multimodal Emotion Recognition
M. Tellamekala, Shahin Amiriparian, Björn W. Schuller, Elisabeth André, T. Giesbrecht, M. Valstar
12 Jun 2022

i-Code: An Integrative and Composable Multimodal Learning Framework
Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, ..., Bin Xiao, Yuanxun Lu, Takuya Yoshioka, Michael Zeng, Xuedong Huang
03 May 2022

Tailor Versatile Multi-modal Learning for Multi-label Emotion Recognition
Yi Zhang, Mingyuan Chen, Jundong Shen, Chongjun Wang
15 Jan 2022

TEASEL: A Transformer-Based Speech-Prefixed Language Model
Mehdi Arjmand, M. Dousti, H. Moradi
12 Sep 2021

Localize, Group, and Select: Boosting Text-VQA by Scene Text Modeling
Xiaopeng Lu, Zhenhua Fan, Yansen Wang, Jean Oh, Carolyn Rose
20 Aug 2021

Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis
Wenmeng Yu, Hua Xu, Ziqi Yuan, Jiele Wu
09 Feb 2021

Audio-Visual Event Localization via Recursive Fusion by Joint Co-Attention
Bin Duan, Hao Tang, Wei Wang, Ziliang Zong, Guowei Yang, Yan Yan
14 Aug 2020

Probabilistic FastText for Multi-Sense Word Embeddings
Ben Athiwaratkun, A. Wilson, Anima Anandkumar
07 Jun 2018

Dynamic Word Embeddings
Robert Bamler, Stephan Mandt
27 Feb 2017