
Uncertainty can explain apparent mistakes in causal reasoning

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

Humans excel at causal reasoning, yet at the same time consistently fail to respect its basic axioms. They seemingly fail to recognize, for instance, that only the direct causes of an event can affect its probability (the Markov condition). How can one explain this paradox? Here we argue that standard normative analyses of causal reasoning mostly apply to the idealized case where the reasoner has perfect confidence in her knowledge of the underlying causal model. Given uncertainty about the correct representation of a causal system, it is not always rational for a reasoner to respect the Markov condition and other ‘normative’ principles. To test whether uncertainty can account for the apparent fallibility of human judgments, we formulate a simple computational model of a rational-but-uncertain causal reasoner. In a re-analysis of a recent causal reasoning study, the model fits the data significantly better than its standard normative counterpart.
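As a rough illustration of the abstract's argument (a minimal sketch, not the authors' actual model): suppose the nominal causal structure is a common cause C of two effects A and B, so the Markov condition implies P(B | C, A) = P(B | C). If the reasoner is not fully confident in that structure and places some credence on an alternative structure with a direct A -> B link, her model-averaged prediction for B shifts with A even when C is held fixed. All structures, parameter values, and the credence weight w_alt below are hypothetical.

```python
# Minimal sketch: uncertainty over causal structure can make a Bayes-rational
# reasoner appear to violate the Markov condition. Hypothetical setup:
# nominal model is a common cause, C -> A and C -> B.

def common_cause(c, a):
    """P(B=1 | C=c, A=a) under the nominal model C -> A, C -> B.
    A is irrelevant once C is known, so the Markov condition holds here."""
    return 0.8 if c == 1 else 0.2

def extra_link(c, a):
    """P(B=1 | C=c, A=a) under an alternative structure with a direct
    A -> B edge (illustrative parameter values)."""
    base = 0.8 if c == 1 else 0.2
    return min(1.0, base + (0.15 if a == 1 else 0.0))

def predictive(c, a, w_alt=0.3):
    """Model-averaged P(B=1 | C=c, A=a), where w_alt is the reasoner's
    credence in the alternative structure (held fixed for simplicity)."""
    return (1 - w_alt) * common_cause(c, a) + w_alt * extra_link(c, a)

if __name__ == "__main__":
    # With C fixed, the prediction for B still shifts with A:
    print("P(B=1 | C=1, A=1) =", round(predictive(c=1, a=1), 3))  # 0.845
    print("P(B=1 | C=1, A=0) =", round(predictive(c=1, a=0), 3))  # 0.800
```

Scored against the nominal common-cause model, the difference between the two printed probabilities would count as a Markov violation, yet the prediction is the rational one given the reasoner's structural uncertainty, which is the sense in which uncertainty can explain apparently fallacious judgments.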
