
Modeling Subjectivity in Cognitive Appraisal with Language Models

Abstract

As the utilization of language models in interdisciplinary, human-centered studies grows, the expectations of model capabilities continue to evolve. Beyond excelling at conventional tasks, models are increasingly expected to perform well on user-centric measurements involving confidence and human (dis)agreement -- factors that reflect subjective preferences. While the modeling of subjectivity plays an essential role in cognitive science and has been extensively studied there, it remains under-explored within the NLP community. In light of this gap, we explore how language models can harness subjectivity by conducting comprehensive experiments and analysis across various scenarios using both fine-tuned models and prompt-based large language models (LLMs). Our quantitative and qualitative experimental results indicate that existing post-hoc calibration approaches often fail to produce satisfactory results. However, our findings reveal that personality traits and demographic information are critical for measuring subjectivity. Furthermore, our in-depth analysis offers valuable insights for future research and development in the interdisciplinary studies of NLP and cognitive science.

@article{zhou2025_2503.11381,
  title={Modeling Subjectivity in Cognitive Appraisal with Language Models},
  author={Yuxiang Zhou and Hainiu Xu and Desmond C. Ong and Petr Slovak and Yulan He},
  journal={arXiv preprint arXiv:2503.11381},
  year={2025}
}