MMRole: A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents

8 August 2024
Yanqi Dai
Huanran Hu
Lei Wang
Shengjie Jin
Xu Chen
Zhiwu Lu
Abstract

Recently, Role-Playing Agents (RPAs) have garnered increasing attention for their potential to deliver emotional value and facilitate sociological research. However, existing studies are primarily confined to the textual modality and cannot simulate humans' multimodal perceptual capabilities. To bridge this gap, we introduce the concept of Multimodal Role-Playing Agents (MRPAs), and propose a comprehensive framework, MMRole, for their development and evaluation, which comprises a personalized multimodal dataset and a robust evaluation approach. Specifically, we construct a large-scale, high-quality dataset, MMRole-Data, consisting of 85 characters, 11K images, and 14K single- or multi-turn dialogues. Additionally, we present a robust evaluation approach, MMRole-Eval, encompassing eight metrics across three dimensions, where a reward model is designed to score MRPAs against the constructed ground-truth data for comparison. Moreover, we develop the first specialized MRPA, MMRole-Agent. Extensive evaluation results demonstrate the improved performance of MMRole-Agent and highlight the primary challenges in developing MRPAs, emphasizing the need for enhanced multimodal understanding and role-playing consistency. The data, code, and models are all available at this https URL.
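The abstract describes MMRole-Eval's scoring only at a high level: a reward model compares an MRPA's reply against a constructed ground-truth reply on each metric. The Python sketch below is purely illustrative of that comparison-based scoring idea; the DialogueSample fields, the metric names, and the reward_model.compare interface are all assumptions for illustration, not the paper's actual data format or API.

from dataclasses import dataclass

# Hypothetical container for one MMRole-Data sample; the actual
# dataset schema is not specified in the abstract.
@dataclass
class DialogueSample:
    character: str     # which of the 85 characters is being played
    image_path: str    # the multimodal context the agent must perceive
    question: str      # a user turn from a single- or multi-turn dialogue
    ground_truth: str  # the constructed reference reply

# Placeholder names for the eight metrics across three dimensions;
# the paper's actual metric names and groupings may differ.
METRICS = [
    "instruction_adherence", "fluency", "coherence",
    "image_understanding", "response_accuracy",
    "personality_consistency", "knowledge_consistency", "tone_consistency",
]

def score_sample(reward_model, sample: DialogueSample, agent_reply: str) -> dict:
    """Score one MRPA reply against the ground truth on every metric.

    `reward_model` is assumed to expose a `compare` method returning a
    relative score for (candidate vs. reference) on one metric; this
    interface is a sketch, not MMRole-Eval's actual API.
    """
    return {
        metric: reward_model.compare(
            metric=metric,
            character=sample.character,
            image=sample.image_path,
            question=sample.question,
            candidate=agent_reply,
            reference=sample.ground_truth,
        )
        for metric in METRICS
    }

Under this sketch, averaging the per-metric scores over the 14K dialogues would yield one comparative score per metric for each evaluated MRPA.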

@article{dai2025_2408.04203,
  title={MMRole: A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents},
  author={Yanqi Dai and Huanran Hu and Lei Wang and Shengjie Jin and Xu Chen and Zhiwu Lu},
  journal={arXiv preprint arXiv:2408.04203},
  year={2025}
}