Spontaneous Giving and Calculated Greed in Language Models

Large language models demonstrate advanced problem-solving capabilities by incorporating reasoning techniques such as chain of thought and reflection. However, it remains unclear how these reasoning capabilities extend to social intelligence. In this study, we investigate this question using economic games that model social dilemmas, where social intelligence plays a crucial role. First, we examine the effects of chain-of-thought and reflection techniques in a public goods game. We then extend our analysis to six economic games involving cooperation and punishment, comparing off-the-shelf non-reasoning and reasoning models. We find that reasoning models significantly reduce cooperation and norm enforcement, prioritizing individual rationality. Consequently, groups with more reasoning models exhibit less cooperation and lower gains through repeated interactions. These behaviors parallel the human tendencies of "spontaneous giving and calculated greed." Our results suggest the need for AI architectures that incorporate social intelligence alongside reasoning capabilities, ensuring that AI supports, rather than disrupts, human cooperative intuition.
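For context, a standard linear public goods game (the textbook formulation; the endowment, group size, and multiplier used in the paper may differ) gives each of $N$ players an endowment $e$, lets player $i$ contribute $c_i \in [0, e]$ to a common pool, multiplies the pool by $m$ (with $1 < m < N$), and shares it equally:

\[
\pi_i = e - c_i + \frac{m}{N}\sum_{j=1}^{N} c_j
\]

Because $m/N < 1$, withholding one's contribution is individually rational, while full contribution maximizes the group's total payoff; this tension between individual rationality and cooperation is what the study measures in language-model agents.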
@article{li2025_2502.17720,
  title   = {Spontaneous Giving and Calculated Greed in Language Models},
  author  = {Yuxuan Li and Hirokazu Shirado},
  journal = {arXiv preprint arXiv:2502.17720},
  year    = {2025}
}