Position: Simulating Society Requires Simulating Thought

Main: 13 pages, 2 figures, 1 table. Bibliography: 3 pages. Appendix: 1 page.
Abstract

LLM-based agents are increasingly used to emulate individual and group behavior, primarily through prompting and supervised fine-tuning. Yet these agents often lack internal coherence, causal reasoning, and belief traceability, which makes them unreliable for analyzing how people reason, deliberate, or respond to interventions. We argue that simulating society with large language models (LLMs) requires more than generating plausible behavior: it demands cognitively grounded reasoning that is structured, revisable, and traceable.
