MCiteBench: A Benchmark for Multimodal Citation Text Generation in MLLMs

4 March 2025
Caiyu Hu, Yikai Zhang, Tinghui Zhu, Yiwei Ye, Yanghua Xiao
Abstract

Multimodal Large Language Models (MLLMs) have advanced in integrating diverse modalities but frequently suffer from hallucination. A promising way to mitigate this issue is to generate text with citations, providing a transparent chain for verification. However, existing work primarily focuses on generating citations for text-only content, overlooking the challenges and opportunities of multimodal contexts. To address this gap, we introduce MCiteBench, the first benchmark designed to evaluate and analyze the multimodal citation text generation ability of MLLMs. Our benchmark comprises data derived from academic papers and review-rebuttal interactions, featuring diverse information sources and multimodal content. We comprehensively evaluate models along multiple dimensions, including citation quality, source reliability, and answer accuracy. Through extensive experiments, we observe that MLLMs struggle with multimodal citation text generation. We also conduct in-depth analyses of model performance, revealing that the bottleneck lies in attributing the correct sources rather than in understanding the multimodal content.
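
To make the "citation quality" dimension concrete, here is a minimal sketch of how such scoring is commonly framed: precision and recall of the evidence IDs a model attaches to each claim against gold evidence. This is an illustration only, not MCiteBench's official metric; all names (CitedClaim, citation_f1, the evidence IDs) are hypothetical.

# Illustrative sketch of citation-quality scoring; the paper defines its own metrics.
from dataclasses import dataclass

@dataclass
class CitedClaim:
    text: str        # one claim (sentence) from the model's answer
    cited: set[str]  # evidence IDs the model attached, e.g. {"Fig.2"}
    gold: set[str]   # evidence IDs that actually support the claim

def citation_f1(claims: list[CitedClaim]) -> float:
    """Micro-averaged F1 of cited vs. gold evidence IDs across all claims."""
    tp = sum(len(c.cited & c.gold) for c in claims)
    cited_total = sum(len(c.cited) for c in claims)
    gold_total = sum(len(c.gold) for c in claims)
    if cited_total == 0 or gold_total == 0:
        return 0.0
    precision = tp / cited_total
    recall = tp / gold_total
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical example: the model cites Figure 2 correctly but misses Table 1,
# so precision is 1.0, recall is 0.5, and F1 is 0.67.
claims = [CitedClaim("Accuracy drops on chart questions.",
                     cited={"Fig.2"}, gold={"Fig.2", "Tab.1"})]
print(f"Citation F1: {citation_f1(claims):.2f}")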

@article{hu2025_2503.02589,
  title={MCiteBench: A Benchmark for Multimodal Citation Text Generation in MLLMs},
  author={Caiyu Hu and Yikai Zhang and Tinghui Zhu and Yiwei Ye and Yanghua Xiao},
  journal={arXiv preprint arXiv:2503.02589},
  year={2025}
}