Benchmarking the rationality of AI decision making using the transitivity axiom

Abstract

Fundamental choice axioms, such as transitivity of preference, provide testable conditions for determining whether human decision making is rational, i.e., consistent with a utility representation. Recent work has demonstrated that AI systems trained on human data can exhibit reasoning biases similar to those of humans, and that AI can, in turn, bias human judgments through AI recommendation systems. We evaluate the rationality of AI responses via a series of choice experiments designed to test transitivity of preference in humans, considering ten versions of Meta's Llama 2 and Llama 3 large language models (LLMs). We apply Bayesian model selection to assess whether these AI-generated choices violate two prominent models of transitivity. We find that the Llama 2 and 3 models generally satisfy transitivity; when violations do occur, they occur only in the Chat/Instruct versions of the LLMs. We argue that rationality axioms, such as transitivity of preference, can be useful for evaluating and benchmarking the quality of AI-generated responses and provide a foundation for understanding computational rationality in AI systems more generally.
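
As background, here is a standard statement of the axiom for illustration; it is not quoted from the paper, and the abstract does not specify which two models of transitivity the authors test, so the probabilistic version shown (weak stochastic transitivity) may differ from theirs. A preference relation \(\succsim\) on a set of alternatives is transitive if

\[
a \succsim b \;\text{and}\; b \succsim c \;\Longrightarrow\; a \succsim c
\qquad \text{for all alternatives } a, b, c.
\]

For probabilistic choice data, where \(P_{xy}\) denotes the probability of choosing \(x\) over \(y\), one common testable formulation is weak stochastic transitivity:

\[
P_{ab} \ge \tfrac{1}{2} \;\text{and}\; P_{bc} \ge \tfrac{1}{2}
\;\Longrightarrow\; P_{ac} \ge \tfrac{1}{2}.
\]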

@article{song2025_2502.10554,
  title={Benchmarking the rationality of AI decision making using the transitivity axiom},
  author={Kiwon Song and James M. Jennings III and Clintin P. Davis-Stober},
  journal={arXiv preprint arXiv:2502.10554},
  year={2025}
}