Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models

Assessing how well a large language model (LLM) understands humans, rather than merely text, remains an open challenge. To bridge this gap, we introduce Sentient Agent as a Judge (SAGE), an automated evaluation framework that measures an LLM's higher-order social cognition. SAGE instantiates a Sentient Agent that simulates human-like emotional changes and inner thoughts during interaction, providing a more realistic evaluation of the tested model in multi-turn conversations. At every turn, the agent reasons about (i) how its emotion changes, (ii) how it feels, and (iii) how it should reply, yielding a numerical emotion trajectory and interpretable inner thoughts. Experiments on 100 supportive-dialogue scenarios show that the final Sentient emotion score correlates strongly with Barrett-Lennard Relationship Inventory (BLRI) ratings and utterance-level empathy metrics, validating psychological fidelity. We also build a public Sentient Leaderboard covering 18 commercial and open-source models that uncovers substantial gaps (up to 4x) between frontier systems (GPT-4o-Latest, Gemini-2.5-Pro) and earlier baselines, gaps not reflected in conventional leaderboards (e.g., Arena). SAGE thus provides a principled, scalable, and interpretable tool for tracking progress toward genuinely empathetic and socially adept language agents.
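To make the per-turn mechanism concrete, the sketch below illustrates one plausible shape of the loop the abstract describes: at each turn the agent produces an emotion change, an inner thought, and a reply, and the running scores form the emotion trajectory used for evaluation. This is a minimal illustration, not the paper's implementation; the `query_llm` callable, the 0-100 emotion scale, and all class and field names are assumptions made here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class TurnRecord:
    """One simulated turn: emotion update, inner thought, and reply."""
    emotion_delta: int   # change in emotion score this turn
    emotion_score: int   # running emotion score after the update
    inner_thought: str   # interpretable rationale for the update
    reply: str           # what the sentient agent says next

@dataclass
class SentientAgentSketch:
    """Hypothetical per-turn loop of a SAGE-style sentient judge.

    `query_llm` stands in for whatever backbone model drives the agent;
    it is assumed to return the three fields the abstract describes:
    the emotion change, the inner thought, and the next reply.
    """
    persona: str
    emotion_score: int = 50                      # assumed 0-100 scale
    history: list = field(default_factory=list)  # list of TurnRecord

    def step(self, model_utterance: str, query_llm) -> TurnRecord:
        # (i) how the emotion changes, (ii) how the agent feels,
        # (iii) how it should reply -- one structured call per turn.
        delta, thought, reply = query_llm(
            persona=self.persona,
            emotion=self.emotion_score,
            history=self.history,
            utterance=model_utterance,
        )
        self.emotion_score = max(0, min(100, self.emotion_score + delta))
        record = TurnRecord(delta, self.emotion_score, thought, reply)
        self.history.append(record)
        return record

    def trajectory(self) -> list:
        """Numerical emotion trajectory used as the evaluation signal."""
        return [turn.emotion_score for turn in self.history]
```

Under this reading, the final emotion score (the last point of `trajectory()`) is the quantity the paper correlates with BLRI ratings and utterance-level empathy metrics.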
@article{zhang2025_2505.02847,
  title={Sentient Agent as a Judge: Evaluating Higher-Order Social Cognition in Large Language Models},
  author={Bang Zhang and Ruotian Ma and Qingxuan Jiang and Peisong Wang and Jiaqi Chen and Zheng Xie and Xingyu Chen and Yue Wang and Fanghua Ye and Jian Li and Yifan Yang and Zhaopeng Tu and Xiaolong Li},
  journal={arXiv preprint arXiv:2505.02847},
  year={2025}
}