Behavioral Economics of AI: LLM Biases and Corrections
Social Science Research Network (SSRN), 2026
Pietro Bini
Lin William Cong
Xing Huang
Lawrence J. Jin
Main: 4 pages · Bibliography: 2 pages · Appendix: 63 pages · 25 figures · 12 tables
Abstract
Do generative AI models, particularly large language models (LLMs), exhibit systematic behavioral biases in economic and financial decisions? If so, how can these biases be mitigated? Drawing on the cognitive psychology and experimental economics literatures, we conduct the most comprehensive set of experiments to date (originally designed to document human biases) on prominent LLM families across model versions and scales. We document systematic patterns in LLM behavior. In preference-based tasks, responses become more human-like as models become more advanced or larger, while in belief-based tasks, advanced large-scale models frequently generate rational responses. Prompting LLMs to make rational decisions reduces biases.
