Emergence of Hierarchical Emotion Organization in Large Language Models

Bo Zhao
Maya Okawa
Eric J. Bigelow
Rose Yu
Tomer Ullman
Ekdeep Singh Lubana
Hidenori Tanaka
Main: 8 pages · Appendix: 11 pages · Bibliography: 5 pages · 26 figures · 3 tables
Abstract

As large language models (LLMs) increasingly power conversational agents, understanding how they model users' emotional states is critical for ethical deployment. Inspired by emotion wheels -- a psychological framework positing that emotions are organized hierarchically -- we analyze probabilistic dependencies between emotional states in model outputs. We find that LLMs naturally form hierarchical emotion trees that align with human psychological models, and that larger models develop more complex hierarchies. We also uncover systematic biases in emotion recognition across socioeconomic personas, with compounding misclassifications for intersectional, underrepresented groups. Human studies reveal striking parallels, suggesting that LLMs internalize aspects of social perception. Beyond highlighting emergent emotional reasoning in LLMs, our results hint at the potential of using cognitively grounded theories to develop better model evaluations.
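One way to picture how probabilistic dependencies can yield an emotion tree is an asymmetry heuristic: if a model that assigns a fine-grained label (e.g. "ecstasy") also tends to assign a coarser one ("joy"), but not the reverse, the coarser emotion acts as a parent. The sketch below is purely illustrative -- the emotion names, toy probabilities, and the `parent_of` heuristic are assumptions for exposition, not the paper's actual data or algorithm.

```python
# Illustrative sketch only: toy numbers and a simple asymmetric-dependency
# heuristic, not the paper's method or data.

emotions = ["joy", "ecstasy", "serenity", "sadness", "grief"]

# cond[a][b] ~ P(model also assigns label b | model assigns label a). Toy values.
cond = {
    "joy":      {"joy": 1.0,  "ecstasy": 0.3,  "serenity": 0.3,  "sadness": 0.05, "grief": 0.02},
    "ecstasy":  {"joy": 0.9,  "ecstasy": 1.0,  "serenity": 0.1,  "sadness": 0.02, "grief": 0.01},
    "serenity": {"joy": 0.85, "ecstasy": 0.1,  "serenity": 1.0,  "sadness": 0.05, "grief": 0.02},
    "sadness":  {"joy": 0.05, "ecstasy": 0.01, "serenity": 0.05, "sadness": 1.0,  "grief": 0.3},
    "grief":    {"joy": 0.02, "ecstasy": 0.01, "serenity": 0.02, "sadness": 0.85, "grief": 1.0},
}

def parent_of(e, threshold=0.5):
    """Pick as parent the emotion that e most strongly implies (and that
    does not imply e back), if the asymmetry exceeds the threshold."""
    best, best_gap = None, threshold
    for g in emotions:
        if g == e:
            continue
        gap = cond[e][g] - cond[g][e]  # e implies g more than g implies e
        if gap > best_gap:
            best, best_gap = g, gap
    return best  # None => treat e as a root of its own subtree

tree = {e: parent_of(e) for e in emotions}
print(tree)
```

With these toy numbers, "ecstasy" and "serenity" attach under "joy" and "grief" under "sadness", while "joy" and "sadness" remain roots, recovering a two-subtree hierarchy reminiscent of an emotion wheel's coarse-to-fine structure.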
