Multimodal Coreference Resolution for Chinese Social Media Dialogues: Dataset and Benchmark Approach

Multimodal coreference resolution (MCR) aims to identify mentions that refer to the same entity across different modalities, such as text and visuals, and is essential for understanding multimodal content. In an era of rapidly growing multimodal content and social media, MCR is particularly crucial for interpreting user interactions and bridging text-visual references to improve communication and personalization. However, MCR research on real-world dialogues remains unexplored due to the lack of sufficient data. To address this gap, we introduce TikTalkCoref, the first Chinese multimodal coreference dataset for social media in real-world scenarios, derived from the popular Douyin short-video platform. The dataset pairs short videos with textual dialogues drawn from user comments and includes manually annotated coreference clusters for both person mentions in the text and the coreferential person head regions in the corresponding video frames. We also present an effective benchmark approach for MCR, focusing on the celebrity domain, and conduct extensive experiments on our dataset, providing reliable benchmark results for this newly constructed dataset. We will release the TikTalkCoref dataset to facilitate future research on MCR for real-world social media dialogues.
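To make the annotation structure concrete, here is a minimal sketch of what a single TikTalkCoref example could look like as a data record: textual person mentions and video head regions are linked by a shared coreference cluster id. The field names (video_id, dialogue, text_mentions, head_regions, cluster_id) and the overall schema are illustrative assumptions, not the dataset's actual release format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical schema for one annotated example; field names are
# illustrative assumptions, not the dataset's actual format.

@dataclass
class TextMention:
    comment_idx: int          # which dialogue turn (user comment) contains the mention
    span: Tuple[int, int]     # character offsets of the person mention in the comment
    cluster_id: int           # coreference cluster this mention belongs to

@dataclass
class HeadRegion:
    frame_idx: int                    # video frame containing the annotated head region
    bbox: Tuple[int, int, int, int]   # (x, y, width, height) of the person's head
    cluster_id: int                   # same id as the coreferential text mentions

@dataclass
class TikTalkCorefExample:
    video_id: str                                        # identifier of the Douyin short video
    dialogue: List[str] = field(default_factory=list)    # textual dialogue from user comments
    text_mentions: List[TextMention] = field(default_factory=list)
    head_regions: List[HeadRegion] = field(default_factory=list)

# Toy example: two mentions in one comment corefer with a head region
# in frame 12 through the shared cluster_id 0.
example = TikTalkCorefExample(
    video_id="douyin_0001",
    dialogue=["这是谁呀？", "她是那个主持人。"],  # "Who is this?", "She is that host."
    text_mentions=[
        TextMention(comment_idx=1, span=(0, 1), cluster_id=0),  # "她" (she)
        TextMention(comment_idx=1, span=(3, 8), cluster_id=0),  # "那个主持人" (that host)
    ],
    head_regions=[HeadRegion(frame_idx=12, bbox=(120, 40, 64, 64), cluster_id=0)],
)
```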
@article{li2025_2504.14321,
  title={Multimodal Coreference Resolution for Chinese Social Media Dialogues: Dataset and Benchmark Approach},
  author={Xingyu Li and Chen Gong and Guohong Fu},
  journal={arXiv preprint arXiv:2504.14321},
  year={2025}
}