AEQ-Bench: Measuring Empathy of Omni-Modal Large Models

Xuan Luo
Lewei Yao
Libo Zhao
Lanqing Hong
Kai Chen
Dehua Tao
Daxin Tan
Ruifeng Xu
Jing Li
Main: 7 pages · Appendix: 7 pages · Bibliography: 3 pages · 7 figures · 16 tables
Abstract

While the automatic evaluation of omni-modal large models (OLMs) is essential, assessing empathy remains a significant challenge due to its inherently affective nature. To investigate this challenge, we introduce AEQ-Bench (Audio Empathy Quotient Benchmark), a novel benchmark that systematically assesses two core empathetic capabilities of OLMs: (i) generating empathetic responses by comprehending affective cues from multi-modal inputs (audio + text), and (ii) judging the empathy of audio responses without relying on text transcription. Compared to existing benchmarks, AEQ-Bench incorporates two novel settings that vary in context specificity and speech tone. Comprehensive assessment across linguistic and paralinguistic metrics reveals that (1) OLMs trained with audio output capabilities generally outperform models with text-only outputs, and (2) while OLMs align with human judgments for coarse-grained quality assessment, they remain unreliable for evaluating fine-grained paralinguistic expressiveness.
