Do large language models and humans have similar behaviors in causal inference with script knowledge?

13 November 2023
Xudong Hong
Margarita Ryzhova
Daniel Adrian Biondi
Ram Sarkar
Abstract

Recently, large pre-trained language models (LLMs) have demonstrated superior language understanding abilities, including zero-shot causal reasoning. However, it is unclear to what extent their capabilities are similar to human ones. Here we study the processing of an event B in a script-based story, which causally depends on a previous event A. In our manipulation, event A is stated, negated, or omitted in an earlier section of the text. We first conducted a self-paced reading experiment, which showed that humans exhibit significantly longer reading times when a causal conflict exists (¬A → B) than under the logical condition (A → B). However, reading times remain similar when cause A is not explicitly mentioned (nil → B), indicating that humans can easily infer event B from their script knowledge. We then tested a variety of LLMs on the same data to check to what extent the models replicate human behavior. Our experiments show that 1) only recent LLMs, like GPT-3 or Vicuna, correlate with human behavior in the ¬A → B condition; 2) despite this correlation, all models still fail to predict that nil → B is less surprising than ¬A → B, indicating that LLMs still have difficulty integrating script knowledge. Our code and collected data set are available at https://github.com/tony-hong/causal-script.
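The comparison described above hinges on token-level surprisal: a language model's surprisal for the target event sentence B is measured under each context condition (A stated, A negated, A omitted) and related to human reading times. The following is a minimal sketch of such a surprisal computation with an off-the-shelf causal LM; the model choice (gpt2 via Hugging Face transformers) and the toy story sentences are illustrative assumptions, not the authors' released code (see the repository linked in the abstract).

# Sketch: per-token surprisal of a target sentence B under three context
# conditions, using a generic causal LM. Model and stimuli are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def surprisal(context: str, target: str) -> float:
    """Mean surprisal (in bits per token) of `target` given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Logits at position i predict the token at position i+1, so the target
    # tokens (positions ctx_len .. end) are predicted by positions shifted by one.
    pred_positions = torch.arange(ctx_ids.shape[1] - 1, input_ids.shape[1] - 1)
    token_log_probs = log_probs[0, pred_positions, tgt_ids[0]]
    return (-token_log_probs / torch.log(torch.tensor(2.0))).mean().item()

# Hypothetical script-based contexts; the real experimental stimuli differ.
target_B = " She paid the cashier and left the store."
contexts = {
    "A -> B":    "Mary went grocery shopping. She put items in her cart.",
    "notA -> B": "Mary went grocery shopping. She did not put any items in her cart.",
    "nil -> B":  "Mary went grocery shopping.",
}
for name, ctx in contexts.items():
    print(f"{name}: {surprisal(ctx, target_B):.2f} bits/token")

Under the paper's hypothesis, a model that integrates script knowledge like humans do would assign the nil → B condition a surprisal close to A → B and clearly below ¬A → B.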
