Department of Statistics, UCLA


Bayesian Generic Priors for Causal Learning

Abstract

The article presents a Bayesian model of causal learning that incorporates generic priors—systematic assumptions about abstract properties of a system of cause–effect relations. The proposed generic priors for causal learning favor sparse and strong (SS) causes—causes that are few in number and high in their individual powers to produce or prevent effects. The SS power model couples these generic priors with a causal generating function based on the assumption that unobservable causal influences on an effect operate independently (P. W. Cheng, 1997). The authors tested this and other Bayesian models, as well as leading nonnormative models, by fitting multiple data sets in which several parameters were varied parametrically across multiple types of judgments. The SS power model accounted for data concerning judgments of both causal strength and causal structure (whether a causal link exists). The model explains why human judgments of causal structure (relative to a Bayesian model lacking these generic priors) are influenced more by causal power and the base rate of the effect and less by sample size. Broader implications of the Bayesian framework for human learning are discussed.
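As a rough illustration of how generic priors and a noisy-OR generating function can combine, the Python sketch below scores a grid of candidate strengths w0 (background cause) and w1 (candidate cause) by the independent-influence likelihood described by Cheng (1997), weighted by an exponential prior that favors a weak background cause and a strong candidate cause. The functional form of ss_log_prior, the value of alpha, and the contingency counts in data are illustrative assumptions for this sketch, not the published parameterization of the SS power model.

```python
import numpy as np

# Grid over causal strengths: w0 = strength of the background cause,
# w1 = strength of the candidate generative cause (power PC parameters).
grid = np.linspace(0.005, 0.995, 100)
W0, W1 = np.meshgrid(grid, grid, indexing="ij")

def noisy_or_loglik(w0, w1, data):
    """Log-likelihood of contingency data under the noisy-OR generating
    function (Cheng, 1997): independent causal influences combine so that
    P(e+ | c+) = w0 + w1 - w0*w1 and P(e+ | c-) = w0.
    data = (effects with cause, trials with cause,
            effects without cause, trials without cause)."""
    e_c, n_c, e_nc, n_nc = data
    p_c = w0 + w1 - w0 * w1      # effect probability when the cause is present
    p_nc = w0                    # effect probability when the cause is absent
    return (e_c * np.log(p_c) + (n_c - e_c) * np.log(1 - p_c)
            + e_nc * np.log(p_nc) + (n_nc - e_nc) * np.log(1 - p_nc))

def ss_log_prior(w0, w1, alpha=5.0):
    """An illustrative sparse-and-strong prior: mass concentrated on a weak
    background cause (w0 near 0) and a strong candidate cause (w1 near 1).
    The exact form and alpha are assumptions, not the published prior."""
    return -alpha * w0 - alpha * (1.0 - w1)

# Hypothetical contingency data: 16 trials with the cause, 16 without.
data = (12, 16, 4, 16)

log_post = noisy_or_loglik(W0, W1, data) + ss_log_prior(W0, W1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Strength judgment: posterior mean of the candidate cause's power w1.
print("estimated causal strength:", (post * W1).sum())
```

A structure judgment (whether a causal link exists at all) could be sketched analogously by comparing the marginal likelihood of this model against one in which w1 is fixed at 0, i.e., no link between the candidate cause and the effect.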
