How people detect incomplete explanations

Abstract

In theory, a causal explanation has no bound: every explanation can be elaborated further. Yet reasoners rate some explanations as more complete than others. To account for this behavior, we developed a novel theory of how people detect explanatory incompleteness. The theory is based on the idea that reasoners construct mental models of causal explanations. By default, each causal relation refers to a single mental model. Reasoners should consider an explanation complete when they can construct a single mental model, but incomplete when they must consider multiple models. Reasoners should therefore rate causal chains (e.g., A causes B and B causes C) as more complete than "common cause" explanations (e.g., A causes B and A causes C) or "common effect" explanations (e.g., A causes C and B causes C). Two experiments validate this prediction. The data suggest that reasoners construct mental models when generating explanations.
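The predicted ranking can be illustrated with a small sketch. Under one simplifying reading, which is an assumption made here for illustration rather than the paper's stated mechanism, a mental model is a single temporal ordering of events in which every cause precedes its effect, so an explanation demands multiple models whenever its causal links leave the relative order of two events open. The Python function count_models below is hypothetical and simply counts such orderings.

    from itertools import permutations

    def count_models(events, causes):
        """Count temporal orderings of `events` in which every
        (cause, effect) pair in `causes` puts the cause first."""
        count = 0
        for order in permutations(events):
            position = {e: i for i, e in enumerate(order)}
            if all(position[c] < position[e] for c, e in causes):
                count += 1
        return count

    # Causal chain: A causes B and B causes C -> one ordering (A B C)
    print(count_models("ABC", [("A", "B"), ("B", "C")]))   # 1

    # Common cause: A causes B and A causes C -> two orderings (A B C, A C B)
    print(count_models("ABC", [("A", "B"), ("A", "C")]))   # 2

    # Common effect: A causes C and B causes C -> two orderings (A B C, B A C)
    print(count_models("ABC", [("A", "C"), ("B", "C")]))   # 2

On this reading, the chain admits exactly one ordering, while the common-cause and common-effect structures each admit two, which matches the completeness ranking the theory predicts.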
