EEmo-Logic: A Unified Dataset and Multi-Stage Framework for Comprehensive Image-Evoked Emotion Assessment

Lancheng Gao
Ziheng Jia
Zixuan Xing
Wei Sun
Huiyu Duan
Guangtao Zhai
Xiongkuo Min
Abstract

Understanding the multi-dimensional attributes and intensity nuances of image-evoked emotions is pivotal for advancing machine empathy and empowering diverse human-computer interaction applications. However, existing models remain limited by coarse-grained emotion perception and deficient reasoning capabilities. To bridge this gap, we introduce EEmoDB, the largest image-evoked emotion understanding dataset to date. It features 5 analysis dimensions spanning 5 distinct task categories, facilitating comprehensive interpretation. Specifically, we compile 1.2M question-answering (QA) pairs (EEmoDB-QA) from 125k images via automated generation, alongside a 36k dataset (EEmoDB-Assess) curated from 25k images for fine-grained assessment. Furthermore, we propose EEmo-Logic, an all-in-one multimodal large language model (MLLM) developed via instruction fine-tuning and task-customized group relative policy optimization (GRPO) with a novel reward design. Extensive experiments demonstrate that EEmo-Logic achieves robust performance on both in-domain and cross-domain datasets, excelling in emotion QA and fine-grained assessment. The code is available at this https URL.
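As context for the GRPO stage mentioned above, here is a minimal sketch of the group-relative advantage normalization at the core of GRPO. This is an illustration of the general technique, not the paper's method: the paper's task-customized reward design is its own contribution and is not reproduced, so the reward tensor below is a placeholder.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Core of GRPO: for each prompt, sample a group of responses,
    score them with a reward function, and normalize each reward
    by the group's mean and standard deviation.

    rewards: shape (num_prompts, group_size), one scalar reward per
    sampled response. The task-customized rewards used by EEmo-Logic
    (e.g., for emotion QA vs. fine-grained assessment) would plug in
    here; this sketch assumes they are already computed.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + 1e-8)  # epsilon avoids divide-by-zero

# Example: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[0.2, 0.9, 0.5, 0.4],
                        [1.0, 1.0, 0.0, 0.5]])
print(group_relative_advantages(rewards))
```

Because advantages are computed relative to a group of samples from the same prompt, GRPO needs no learned value model, which is part of why it is attractive for reward-driven fine-tuning of MLLMs.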
