
If Probable, Then Acceptable? Understanding Conditional Acceptability Judgments in Large Language Models

Main: 8 pages
Bibliography: 3 pages
Appendix: 11 pages
Figures: 13
Tables: 20
Abstract

Conditional acceptability refers to how plausible a conditional statement is perceived to be. It plays an important role in communication and reasoning, as it influences how individuals interpret implications, assess arguments, and make decisions based on hypothetical scenarios. When humans evaluate how acceptable a conditional "If A, then B" is, their judgments are influenced by two main factors: the conditional probability of B given A, and the semantic relevance of the antecedent A given the consequent B (i.e., whether A meaningfully supports B). While prior work has examined how large language models (LLMs) draw inferences about conditional statements, it remains unclear how these models judge the acceptability of such statements. To address this gap, we present a comprehensive study of LLMs' conditional acceptability judgments across different model families, sizes, and prompting strategies. Using linear mixed-effects models and ANOVA tests, we find that models are sensitive to both conditional probability and semantic relevance, though to varying degrees depending on architecture and prompting style. A comparison with human data reveals that while LLMs incorporate probabilistic and semantic cues, they do so less consistently than humans. Notably, larger models do not necessarily align more closely with human judgments.
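The conditional-probability factor described above can be illustrated with a minimal sketch. The scenario, function name, and counts below are invented for illustration and are not from the paper; the point is simply how P(B | A), one of the two cues the abstract names, is estimated from observations.

```python
# Toy sketch: estimating the conditional probability P(B | A) that
# partly drives the acceptability of "If A, then B".
# All data here is hypothetical.

def conditional_probability(observations):
    """Estimate P(B | A) from (A, B) boolean pairs.

    Returns None when A never occurs (P(B | A) is undefined).
    """
    a_count = sum(1 for a, _ in observations if a)
    ab_count = sum(1 for a, b in observations if a and b)
    return ab_count / a_count if a_count else None

# "If it rains (A), then the street is wet (B)" -- invented observations
obs = [(True, True), (True, True), (True, False),
       (False, True), (False, False)]
p = conditional_probability(obs)  # 2 of the 3 rainy cases have a wet street
```

A high P(B | A) alone is not sufficient for acceptability, which is why the abstract also highlights semantic relevance: "If it rains, then 2 + 2 = 4" has conditional probability 1 but A does not meaningfully support B.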
