MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations

5 October 2018
Soujanya Poria
Devamanyu Hazarika
Navonil Majumder
Gautam Naik
Erik Cambria
Rada Mihalcea

Papers citing "MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations"

50 / 403 papers shown
Let's Go Real Talk: Spoken Dialogue Model for Face-to-Face Conversation
Se Jin Park
Chae Won Kim
Hyeongseop Rha
Minsu Kim
Joanna Hong
Jeong Hun Yeo
Yong Man Ro
CVBM
AuLLM
42
6
0
12 Jun 2024
Speech Emotion Recognition with ASR Transcripts: A Comprehensive Study on Word Error Rate and Fusion Techniques
Yuanchao Li
Peter Bell
Catherine Lai
43
9
0
12 Jun 2024
ExHuBERT: Enhancing HuBERT Through Block Extension and Fine-Tuning on 37 Emotion Datasets
Shahin Amiriparian
Filip Packań
Maurice Gerczuk
Björn W. Schuller
21
4
0
11 Jun 2024
EmoBox: Multilingual Multi-corpus Speech Emotion Recognition Toolkit and Benchmark
Ziyang Ma
Mingjie Chen
Hezhao Zhang
Zhisheng Zheng
Wenxi Chen
Xiquan Li
Jiaxin Ye
Xie Chen
Thomas Hain
30
13
0
11 Jun 2024
Representation Learning with Conditional Information Flow Maximization
Dou Hu
Lingwei Wei
Wei Zhou
Songlin Hu
SSL
45
1
0
08 Jun 2024
Think out Loud: Emotion Deducing Explanation in Dialogues
JiangNan Li
Zheng-Shen Lin
Lanrui Wang
Q. Si
Yanan Cao
Mo Yu
Peng Fu
Weiping Wang
Jie Zhou
34
0
0
07 Jun 2024
BLSP-Emo: Towards Empathetic Large Speech-Language Models
Chen Wang
Minpeng Liao
Zhongqiang Huang
Junhong Wu
Chengqing Zong
Jiajun Zhang
VLM
AuLLM
38
4
0
06 Jun 2024
Evaluation of data inconsistency for multi-modal sentiment analysis
Yufei Wang
Mengyue Wu
34
1
0
05 Jun 2024
Enhancing Emotion Recognition in Conversation through Emotional Cross-Modal Fusion and Inter-class Contrastive Learning
Haoxiang Shi
Xulong Zhang
Ning Cheng
Yong Zhang
Jun Yu
Jing Xiao
Jianzong Wang
26
1
0
28 May 2024
EmpathicStories++: A Multimodal Dataset for Empathy towards Personal Experiences
Jocelyn Shen
Y. Kim
Mohit Hulse
W. Zulfikar
Sharifa Alghowinem
C. Breazeal
Hae Won Park
34
6
0
24 May 2024
Emotion Identification for French in Written Texts: Considering their Modes of Expression as a Step Towards Text Complexity Analysis
A. Étienne
Delphine Battistelli
Gwénolé Lecorvé
26
1
0
23 May 2024
MELD-ST: An Emotion-aware Speech Translation Dataset
Sirou Chen
Sakiko Yahata
Shuichiro Shimizu
Zhengdong Yang
Yihang Li
Chenhui Chu
Sadao Kurohashi
19
1
0
21 May 2024
Unsupervised Multimodal Clustering for Semantics Discovery in Multimodal Utterances
Hanlei Zhang
Hua Xu
Fei Long
Xin Wang
Kai Gao
46
3
0
21 May 2024
SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations
Fanfan Wang
Heqing Ma
Jianfei Yu
Rui Xia
Erik Cambria
40
22
0
19 May 2024
Uni-MoE: Scaling Unified Multimodal LLMs with Mixture of Experts
Yunxin Li
Shenyuan Jiang
Baotian Hu
Longyue Wang
Wanqi Zhong
Wenhan Luo
Lin Ma
Min-Ling Zhang
MoE
46
28
0
18 May 2024
Infer Induced Sentiment of Comment Response to Video: A New Task, Dataset and Baseline
Qi Jia
Baoyu Fan
Cong Xu
Lu Liu
Liang Jin
Guoguang Du
Zhenhua Guo
Yaqian Zhao
Xuanjing Huang
Rengang Li
37
0
0
15 May 2024
Designing and Evaluating Dialogue LLMs for Co-Creative Improvised Theatre
Boyd Branch
Piotr Wojciech Mirowski
Kory W. Mathewson
Sophia Ppali
A. Covaci
34
1
0
11 May 2024
LyS at SemEval-2024 Task 3: An Early Prototype for End-to-End Multimodal Emotion Linking as Graph-Based Parsing
Ana Ezquerro
David Vilares
38
1
0
10 May 2024
ESIHGNN: Event-State Interactions Infused Heterogeneous Graph Neural Network for Conversational Emotion Recognition
Xupeng Zha
Huan Zhao
Zixing Zhang
24
1
0
07 May 2024
New Benchmark Dataset and Fine-Grained Cross-Modal Fusion Framework for Vietnamese Multimodal Aspect-Category Sentiment Analysis
Quy Hoang Nguyen
Minh-Van Truong Nguyen
Kiet Van Nguyen
27
2
0
01 May 2024
Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum
Tao Meng
Fuchen Zhang
Yuntao Shou
Wei Ai
Nan Yin
Keqin Li
39
21
0
27 Apr 2024
Revisiting Multi-modal Emotion Learning with Broad State Space Models and Probability-guidance Fusion
Yuntao Shou
Tao Meng
Fuchen Zhang
Nan Yin
Keqin Li
Mamba
41
22
0
27 Apr 2024
Empirical Analysis of Dialogue Relation Extraction with Large Language Models
Guozheng Li
Zijie Xu
Ziyu Shang
Jiajun Liu
Ke Ji
Yikai Guo
56
2
0
27 Apr 2024
MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition
Zheng Lian
Haiyang Sun
Guoying Zhao
Zhuofan Wen
Siyuan Zhang
...
Bin Liu
Erik Cambria
Guoying Zhao
Björn W. Schuller
Jianhua Tao
VLM
35
11
0
26 Apr 2024
Samsung Research China-Beijing at SemEval-2024 Task 3: A multi-stage framework for Emotion-Cause Pair Extraction in Conversations
Shen Zhang
Haojie Zhang
Jing Zhang
Xudong Zhang
Yimeng Zhuang
Jinting Wu
47
2
0
25 Apr 2024
Context-Aware Siamese Networks for Efficient Emotion Recognition in Conversation
Barbara Gendron
Gaël Guibon
25
0
0
17 Apr 2024
Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition
Marah Halawa
Florian Blume
Pia Bideau
Martin Maier
Rasha Abdel Rahman
Olaf Hellwich
CVBM
36
1
0
16 Apr 2024
AIMDiT: Modality Augmentation and Interaction via Multimodal Dimension Transformation for Emotion Recognition in Conversations
Sheng Wu
Jiaxing Liu
Longbiao Wang
Dongxiao He
Xiaobao Wang
Jianwu Dang
37
0
0
12 Apr 2024
IITK at SemEval-2024 Task 10: Who is the speaker? Improving Emotion Recognition and Flip Reasoning in Conversations via Speaker Embeddings
Shubham Patel
Divyaksh Shukla
Ashutosh Modi
38
1
0
06 Apr 2024
Personality-affected Emotion Generation in Dialog Systems
Zhiyuan Wen
Jiannong Cao
Jiaxing Shen
Ruosong Yang
Shuaiqi Liu
Maosong Sun
35
3
0
03 Apr 2024
Token Trails: Navigating Contextual Depths in Conversational AI with ChatLLM
Md. Kowsher
Ritesh Panditi
Nusrat Jahan Prottasha
Prakash Bhat
A. Bairagi
M. Arefin
28
1
0
03 Apr 2024
LastResort at SemEval-2024 Task 3: Exploring Multimodal Emotion Cause Pair Extraction as Sequence Labelling Task
Suyash Vardhan Mathur
Akshett Rai Jindal
Hardik Mittal
Manish Shrivastava
33
1
0
02 Apr 2024
MIPS at SemEval-2024 Task 3: Multimodal Emotion-Cause Pair Extraction in Conversations with Multimodal Language Models
Zebang Cheng
Fuqiang Niu
Yuxiang Lin
Zhi-Qi Cheng
Bowen Zhang
Xiaojiang Peng
31
7
0
31 Mar 2024
UniMEEC: Towards Unified Multimodal Emotion Recognition and Emotion Cause
Guimin Hu
Zhihong Zhu
Daniel Hershcovich
Hasti Seifi
Jiayuan Xie
27
7
0
30 Mar 2024
Emotion-Anchored Contrastive Learning Framework for Emotion Recognition in Conversation
Fangxu Yu
Junjie Guo
Zhen Wu
Xinyu Dai
37
7
0
29 Mar 2024
Emotion Neural Transducer for Fine-Grained Speech Emotion Recognition
Siyuan Shen
Yu Gao
Feng Liu
Hanyang Wang
Aimin Zhou
37
6
0
28 Mar 2024
The NeurIPS 2023 Machine Learning for Audio Workshop: Affective Audio Benchmarks and Novel Data
Alice Baird
Rachel Manzelli
Panagiotis Tzirakis
Chris Gagne
Haoqi Li
Sadie Allen
Sander Dieleman
Brian Kulis
Shrikanth S. Narayanan
Alan S. Cowen
23
0
0
21 Mar 2024
Audio-Visual Compound Expression Recognition Method based on Late Modality Fusion and Rule-based Decision
E. Ryumina
M. Markitantov
D. Ryumin
Heysem Kaya
Alexey Karpov
40
6
0
19 Mar 2024
MIntRec2.0: A Large-scale Benchmark Dataset for Multimodal Intent Recognition and Out-of-scope Detection in Conversations
Hanlei Zhang
Xin Wang
Hua Xu
Qianrui Zhou
Kai Gao
Jianhua Su
Jinyue Zhao
Wenrui Li
Yanting Chen
42
2
0
16 Mar 2024
JMI at SemEval 2024 Task 3: Two-step approach for multimodal ECAC using in-context learning with GPT and instruction-tuned Llama models
Arefa
Mohammed Abbas Ansari
Chandni Saxena
Tanvir Ahmad
MLLM
34
2
0
05 Mar 2024
TopicDiff: A Topic-enriched Diffusion Approach for Multimodal Conversational Emotion Detection
Jiamin Luo
Jingjing Wang
Guodong Zhou
27
1
0
04 Mar 2024
Emotion Analysis in NLP: Trends, Gaps and Roadmap for Future Directions
Flor Miriam Plaza del Arco
Alba Curry
A. C. Curry
Dirk Hovy
44
14
0
02 Mar 2024
Probing the Information Encoded in Neural-based Acoustic Models of Automatic Speech Recognition Systems
Quentin Raymondaud
Mickael Rouvier
Richard Dufour
25
1
0
29 Feb 2024
SemEval 2024 -- Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF)
Shivani Kumar
Md. Shad Akhtar
Erik Cambria
Tanmoy Chakraborty
LRM
29
23
0
29 Feb 2024
KoDialogBench: Evaluating Conversational Understanding of Language Models with Korean Dialogue Benchmark
Seongbo Jang
Seonghyeon Lee
Hwanjo Yu
ELM
29
0
0
27 Feb 2024
Curriculum Learning Meets Directed Acyclic Graph for Multimodal Emotion Recognition
Cam-Van Thi Nguyen
Cao-Bach Nguyen
Quang-Thuy Ha
Duc-Trong Le
29
0
0
27 Feb 2024
Advancing Large Language Models to Capture Varied Speaking Styles and Respond Properly in Spoken Conversations
Guan-Ting Lin
Cheng-Han Chiang
Hung-yi Lee
34
22
0
20 Feb 2024
EmoBench: Evaluating the Emotional Intelligence of Large Language Models
Sahand Sabour
Siyang Liu
Zheyuan Zhang
June M. Liu
Jinfeng Zhou
Alvionna S. Sunaryo
Juanzi Li
Tatia M.C. Lee
Rada Mihalcea
Minlie Huang
32
12
0
19 Feb 2024
Speech Translation with Speech Foundation Models and Large Language Models: What is There and What is Missing?
Marco Gaido
Sara Papi
Matteo Negri
L. Bentivogli
41
13
0
19 Feb 2024
Both Matter: Enhancing the Emotional Intelligence of Large Language Models without Compromising the General Intelligence
Weixiang Zhao
Zhuojun Li
Shilong Wang
Yang Wang
Yulin Hu
Yanyan Zhao
Chen Wei
Bing Qin
22
4
0
15 Feb 2024