ResearchTrend.AI

arXiv:2005.03545
MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis

7 May 2020
Devamanyu Hazarika, Roger Zimmermann, Soujanya Poria

Papers citing "MISA: Modality-Invariant and -Specific Representations for Multimodal Sentiment Analysis"

13 / 13 papers shown
PREMISE: Matching-based Prediction for Accurate Review Recommendation
Wei Han, Hui Chen, Soujanya Poria
02 May 2025 · 0 citations

Towards Robust Multimodal Physiological Foundation Models: Handling Arbitrary Missing Modalities
Xi Fu, Wei-Bang Jiang, Yi Ding, Cuntai Guan
28 Apr 2025 · 0 citations

Multimodal Emotion Recognition using Audio-Video Transformer Fusion with Cross Attention
Joe Dhanith, Shravan Venkatraman, Modigari Narendra, Vigya Sharma, Santhosh Malarvannan
20 Feb 2025 · 0 citations

Towards Explainable Multimodal Depression Recognition for Clinical Interviews
Wenjie Zheng, Qiming Xie, Zengzhi Wang, Jianfei Yu, Rui Xia
28 Jan 2025 · 0 citations

Completed Feature Disentanglement Learning for Multimodal MRIs Analysis
Tianling Liu, Hongying Liu, Fanhua Shang, Lequan Yu, Tong Han, Liang Wan
06 Jul 2024 · 1 citation

Speech Emotion Recognition with ASR Transcripts: A Comprehensive Study on Word Error Rate and Fusion Techniques
Yuanchao Li, Peter Bell, Catherine Lai
12 Jun 2024 · 9 citations

TCAN: Text-oriented Cross Attention Network for Multimodal Sentiment Analysis
Ming Zhou, Yunfei Feng, Ziqi Zhou, Kai Wang, Tong Wang, Dong-ming Yan
06 Apr 2024 · 0 citations

TEASEL: A Transformer-Based Speech-Prefixed Language Model
Mehdi Arjmand, M. Dousti, H. Moradi
12 Sep 2021 · 18 citations

Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis
Wei Han, Hui Chen, Soujanya Poria
01 Sep 2021 · 307 citations

M2H2: A Multimodal Multiparty Hindi Dataset For Humor Recognition in Conversations
Dushyant Singh Chauhan, G. Singh, Navonil Majumder, Amir Zadeh, Asif Ekbal, P. Bhattacharyya, Louis-Philippe Morency, Soujanya Poria
03 Aug 2021 · 17 citations

Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis
Wenmeng Yu, Hua Xu, Ziqi Yuan, Jiele Wu
09 Feb 2021 · 430 citations

Supervised Multimodal Bitransformers for Classifying Images and Text
Douwe Kiela, Suvrat Bhooshan, Hamed Firooz, Ethan Perez, Davide Testuggine
06 Sep 2019 · 238 citations

Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding
Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, Marcus Rohrbach
06 Jun 2016 · 1,403 citations