

MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning

ACM Multimedia (ACM MM), 2023 · arXiv:2304.08981 · 18 April 2023
Zheng Lian, Haiyang Sun, Licai Sun, Kang Chen, Mingyu Xu, Kexin Wang, Ke Xu, Yu He, Ying Li, Jinming Zhao, Ye Liu, Bin Liu, Jiangyan Yi, Meng Wang, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao
ArXiv (abs) · PDF · HTML

Papers citing "MER 2023: Multi-label Learning, Modality Robustness, and Semi-Supervised Learning"

28 papers shown
When One Modality Sabotages the Others: A Diagnostic Lens on Multimodal Reasoning
Chenyu Zhang, Minsol Kim, Shohreh Ghorbani, Jingyao Wu, Rosalind Picard, Patricia Maes, Paul Pu Liang
04 Nov 2025

VidEmo: Affective-Tree Reasoning for Emotion-Centric Video Foundation Models
Zhicheng Zhang, Weicheng Wang, Yongjie Zhu, Wenyu Qin, Pengfei Wan, Di Zhang, Jufeng Yang
04 Nov 2025

Multimodal Large Language Models Meet Multimodal Emotion Recognition and Reasoning: A Survey
Yuntao Shou, Tao Meng, Wei Ai, Keqin Li
29 Sep 2025

StableToken: A Noise-Robust Semantic Speech Tokenizer for Resilient SpeechLLMs
Yuhan Song, Linhao Zhang, Chuhan Wu, Aiwei Liu, Wei Jia, Houfeng Wang, Xiao-bin Zhou
26 Sep 2025

A Unified Evaluation Framework for Multi-Annotator Tendency Learning
Liyun Zhang, Jingcheng Ke, Shenli Fan, Xuanmeng Sha, Zheng Lian
14 Aug 2025

Benchmarking and Bridging Emotion Conflicts for Multimodal Emotion Reasoning
Zhiyuan Han, Beier Zhu, Yanlong Xu, Peipei Song, Xun Yang
02 Aug 2025

Multimodal Video Emotion Recognition with Reliable Reasoning Priors
Zhepeng Wang, Yingjian Zhu, Guanghao Dong, Hongzhu Yi, F. Chen, Xinming Wang, Jun Xie
29 Jul 2025

QuMAB: Query-based Multi-Annotator Behavior Modeling with Reliability under Sparse Labels
Liyun Zhang, Zheng Lian, Hong Liu, Takanori Takebe, Yuta Nakashima
23 Jul 2025

EmoSign: A Multimodal Dataset for Understanding Emotions in American Sign Language
Phoebe Chua, Cathy Mengying Fang, Takehiko Ohkawa, Raja Kushalnagar, Suranga Nanayakkara, Pattie Maes
20 May 2025

Face-LLaVA: Facial Expression and Attribute Understanding through Instruction Tuning
Ashutosh Chaubey, Xulang Guan, Mohammad Soleymani
09 Apr 2025

Spark-TTS: An Efficient LLM-Based Text-to-Speech Model with Single-Stream Decoupled Speech Tokens
Xiang Wang, Mingqi Jiang, Tianhao Shen, Ziyu Zhang, Shixuan Liu, ..., Zhifei Li, Xie Chen, Lei Xie, Xu Tan, Wei Xue
03 Mar 2025

OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia
Xuelong Geng, Kun Wei, Qijie Shao, Shuiyun Liu, Zhennan Lin, ..., Yuhang Dai, Xinfa Zhu, Yue Li, Li Zhang, Lei Xie
23 Jan 2025

Enhancing Multimodal Sentiment Analysis for Missing Modality through Self-Distillation and Unified Modality Cross-Attention
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
Yuzhe Weng, Haotian Wang, Tian Gao, Kewei Li, Shutong Niu, Jun Du
19 Oct 2024

Open-vocabulary Multimodal Emotion Recognition: Dataset, Metric, and Benchmark
Zheng Lian, Haiyang Sun, Guoying Zhao, Lan Chen, Haoyu Chen, ..., Rui Liu, Shan Liang, Ya Li, Jiangyan Yi, Jianhua Tao
02 Oct 2024

Early Joint Learning of Emotion Information Makes MultiModal Model Understand You Better
Mengying Ge, Mingyang Li, Dongkai Tang, Pengbo Li, Kuo Liu, Shuhao Deng, Songbai Pu, Liu Liu, Yang Song, Tao Zhang
12 Sep 2024

Audio-Guided Fusion Techniques for Multimodal Emotion Analysis
Pujin Shi, Fei Gao
08 Sep 2024

Leveraging Contrastive Learning and Self-Training for Multimodal Emotion Recognition with Limited Labeled Samples
Qi Fan, Yutong Li, Yi Xin, Xinyu Cheng, Guanglai Gao, Miao Ma
23 Aug 2024

SZTU-CMU at MER2024: Improving Emotion-LLaMA with Conv-Attention for Multimodal Emotion Recognition
Zebang Cheng, Shuyuan Tu, Dawei Huang, Heng Li, Xiaojiang Peng, Zhi-Qi Cheng, Alexander G. Hauptmann
20 Aug 2024

Emotion and Intent Joint Understanding in Multimodal Conversation: A Benchmarking Dataset
Rui Liu, Haolin Zuo, Zheng Lian, Xiaofen Xing, Björn W. Schuller, Haizhou Li
03 Jul 2024

Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning
Zebang Cheng, Zhi-Qi Cheng, Jun-Yan He, Yuxuan Zhou, Kai Wang, Yuxiang Lin, Zheng Lian, Xiaojiang Peng, Alexander G. Hauptmann
17 Jun 2024

Large Language Models Meet Text-Centric Multimodal Sentiment Analysis: A Survey
Hao Yang, Yanyan Zhao, Yang Wu, Shilong Wang, Tian Zheng, Hongbo Zhang, Zongyang Ma, Wanxiang Che, Bing Qin
12 Jun 2024

EmoBox: Multilingual Multi-corpus Speech Emotion Recognition Toolkit and Benchmark
Ziyang Ma, Mingjie Chen, Hezhao Zhang, Zhisheng Zheng, Wenxi Chen, Xiquan Li, Jiaxin Ye, Xie Chen, Thomas Hain
11 Jun 2024

Evaluation of Data Inconsistency for Multi-modal Sentiment Analysis
Yufei Wang, Mengyue Wu
05 Jun 2024

MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition
Zheng Lian, Haiyang Sun, Licai Sun, Zhuofan Wen, Siyuan Zhang, ..., Yinan Han, Erik Cambria, Guoying Zhao, Björn W. Schuller, Jianhua Tao
26 Apr 2024

HiCMAE: Hierarchical Contrastive Masked Autoencoder for Self-Supervised Audio-Visual Emotion Recognition
Information Fusion (Inf. Fusion), 2024
Licai Sun, Zheng Lian, Bin Liu, Jianhua Tao
11 Jan 2024

GPT-4V with Emotion: A Zero-shot Benchmark for Generalized Emotion Recognition
Zheng Lian, Guoying Zhao, Haiyang Sun, Kang Chen, Zhuofan Wen, Hao Gu, Yinan Han, Jianhua Tao
07 Dec 2023

Explainable Multimodal Emotion Recognition
Zheng Lian, Haiyang Sun, Guoying Zhao, Hao Gu, Zhuofan Wen, ..., Shan Liang, Ya Li, Jiangyan Yi, B. Liu, Jianhua Tao
27 Jun 2023

VISTANet: VIsual Spoken Textual Additive Net for Interpretable Multimodal Emotion Recognition
IEEE Transactions on Affective Computing (IEEE TAC), 2022
Puneet Kumar, Sarthak Malik, Balasubramanian Raman, Amritpal Singh
24 Aug 2022