Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?

Abstract

Language model (LM) agents are increasingly used as autonomous decision-makers that need to actively gather information to guide their decisions. A crucial cognitive skill for such agents is the efficient exploration and understanding of the causal structure of the world -- key to robust, scientifically grounded reasoning. Yet, it remains unclear whether LMs possess this capability or exhibit systematic biases leading to erroneous conclusions. In this work, we examine LMs' ability to explore and infer causal relationships, using the well-established "Blicket Test" paradigm from developmental psychology. We find that LMs reliably infer the common, intuitive disjunctive causal relationships but systematically struggle with the unusual, yet equally (or sometimes even more) evidenced conjunctive ones. This "disjunctive bias" persists across model families, sizes, and prompting strategies, and performance further declines as task complexity increases. Interestingly, an analogous bias appears in human adults, suggesting that LMs may have inherited deep-seated reasoning heuristics from their training data. To test this, we quantify similarities between LMs and humans, finding that LMs exhibit adult-like inference profiles (but not child-like ones). Finally, we propose a test-time sampling method that explicitly samples and eliminates hypotheses about causal relationships from the LM. This scalable approach significantly reduces the disjunctive bias and moves LMs closer to the goal of scientific, causally rigorous reasoning.
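To make the setup concrete, the sketch below is a minimal illustration of the Blicket Test and of hypothesis elimination, not the authors' method: the paper samples candidate hypotheses from the LM itself at test time, whereas here the hypothesis space (which objects are blickets, and whether the detector follows a disjunctive or conjunctive rule) is enumerated exhaustively and pruned against observed interventions. All names (OBJECTS, detector_fires, eliminate) are illustrative assumptions.

from itertools import product

OBJECTS = ["A", "B", "C"]

def detector_fires(on_machine, blickets, rule):
    # Count the true blickets currently placed on the machine.
    present = len(set(on_machine) & set(blickets))
    if rule == "disjunctive":   # any single blicket activates the detector
        return present >= 1
    if rule == "conjunctive":   # at least two blickets must be present together
        return present >= 2
    raise ValueError(rule)

def all_hypotheses():
    # A hypothesis = (which objects are blickets, which rule governs the detector).
    for mask in product([False, True], repeat=len(OBJECTS)):
        blickets = frozenset(o for o, keep in zip(OBJECTS, mask) if keep)
        for rule in ("disjunctive", "conjunctive"):
            yield blickets, rule

def eliminate(hypotheses, observations):
    # Keep only hypotheses consistent with every observed intervention.
    return [
        (blickets, rule)
        for blickets, rule in hypotheses
        if all(detector_fires(on_machine, blickets, rule) == fired
               for on_machine, fired in observations)
    ]

if __name__ == "__main__":
    # Evidence favouring a conjunctive structure: A or B alone does nothing,
    # but A and B together activate the machine.
    observations = [(("A",), False), (("B",), False), (("A", "B"), True)]
    for blickets, rule in eliminate(list(all_hypotheses()), observations):
        print(sorted(blickets), rule)

With this evidence, only conjunctive hypotheses in which both A and B are blickets survive elimination -- exactly the kind of structure the abstract reports LMs systematically under-infer.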

@article{gx-chen2025_2505.09614,
  title={Language Agents Mirror Human Causal Reasoning Biases. How Can We Help Them Think Like Scientists?},
  author={Anthony GX-Chen and Dongyan Lin and Mandana Samiei and Doina Precup and Blake A. Richards and Rob Fergus and Kenneth Marino},
  journal={arXiv preprint arXiv:2505.09614},
  year={2025}
}