Do Large Language Models Reason Causally Like Us? Even Better?

14 February 2025
Hanna M. Dettki
Brenden M. Lake
Charley M. Wu
Bob Rehder
Abstract

Causal reasoning is a core component of intelligence. Large language models (LLMs) have shown impressive capabilities in generating human-like text, raising questions about whether their responses reflect true understanding or statistical patterns. We compared causal reasoning in humans and four LLMs using tasks based on collider graphs, rating the likelihood of a query variable occurring given evidence from other variables. We find that LLMs reason causally along a spectrum from human-like to normative inference, with alignment shifting based on model, context, and task. Overall, GPT-4o and Claude showed the most normative behavior, including "explaining away", whereas Gemini-Pro and GPT-3.5 did not. Although all agents deviated from the expected independence of causes - Claude the least - they exhibited strong associative reasoning and predictive inference when assessing the likelihood of the effect given its causes. These findings underscore the need to assess AI biases as they increasingly assist human decision-making.
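As an illustrative sketch only (not the paper's actual materials, prompts, or parameters), the snippet below computes the normative "explaining away" pattern in a collider graph C1 -> E <- C2 under an assumed noisy-OR parameterization with arbitrary priors and causal strengths: learning that the alternative cause C2 is present lowers the posterior probability of C1 given the effect.

# Illustrative sketch of "explaining away" in a collider graph C1 -> E <- C2.
# Priors and noisy-OR causal strengths are arbitrary assumptions for demonstration,
# not values taken from the paper.
from itertools import product

p_c1, p_c2 = 0.3, 0.3         # assumed prior probabilities of each cause
w1, w2, leak = 0.8, 0.8, 0.1  # assumed noisy-OR causal strengths and leak rate

def p_effect(c1, c2):
    """P(E=1 | C1=c1, C2=c2) under a noisy-OR likelihood."""
    return 1.0 - (1.0 - leak) * (1.0 - w1) ** c1 * (1.0 - w2) ** c2

def joint(c1, c2, e):
    """Joint probability P(C1=c1, C2=c2, E=e) for the collider graph."""
    pe = p_effect(c1, c2)
    return (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2) * (pe if e else 1 - pe)

def posterior_c1(evidence):
    """P(C1=1 | evidence), where evidence fixes some of {'c2', 'e'}."""
    def consistent(c1, c2, e):
        return all({'c1': c1, 'c2': c2, 'e': e}[k] == v for k, v in evidence.items())
    num = sum(joint(1, c2, e) for c2, e in product((0, 1), repeat=2) if consistent(1, c2, e))
    den = sum(joint(c1, c2, e) for c1, c2, e in product((0, 1), repeat=3) if consistent(c1, c2, e))
    return num / den

# Normative pattern: evidence for the other cause lowers P(C1 | E).
print(posterior_c1({'e': 1}))            # P(C1=1 | E=1)
print(posterior_c1({'e': 1, 'c2': 1}))   # P(C1=1 | E=1, C2=1)  -> smaller

With these assumed parameters, P(C1=1 | E=1) is roughly 0.54 and drops to roughly 0.34 once C2 is also known to be present; this decrease is the "explaining away" signature that the abstract reports in GPT-4o and Claude but not in Gemini-Pro and GPT-3.5.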

View on arXiv: https://arxiv.org/abs/2502.10215
@article{dettki2025_2502.10215,
  title={Do Large Language Models Reason Causally Like Us? Even Better?},
  author={Hanna M. Dettki and Brenden M. Lake and Charley M. Wu and Bob Rehder},
  journal={arXiv preprint arXiv:2502.10215},
  year={2025}
}