
Learning Causal Overhypotheses through Exploration in Children and Computational Models

Abstract

Human children are proficient explorers, using causal information to great benefit. In contrast, typical AI agents do not consider underlying causal structures during exploration. To improve our understanding of the differences between children and agents—and ultimately to improve AI agents’ performance—we designed a virtual Blicket experiment to test children’s ability to leverage causal information while exploring a novel environment. This experiment doubles as an RL environment with a controllable causal structure, allowing us to evaluate exploration strategies used by both agents and children. Our results demonstrate that there are significant differences between information-gain-optimal RL exploration and the exploration of children: in particular, children appear to consider a wide range of creative overhypotheses, including stochasticity, total weight, object ordering, and more. We leverage this new insight to lay the groundwork for future research into efficient exploration and disambiguation of causal structures for RL algorithms.
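
To illustrate what an RL environment with a controllable causal structure can look like, the sketch below implements a toy Blicket-detector environment whose activation rule is governed by a hidden overhypothesis. This is a minimal sketch under assumed semantics (disjunctive: any one blicket activates the detector; conjunctive: at least two blickets must be placed together); the class name BlicketEnv, its parameters, and its interface are hypothetical illustrations and are not the authors' released environment.

import random

class BlicketEnv:
    def __init__(self, num_objects=3, overhypothesis="disjunctive", seed=None):
        rng = random.Random(seed)
        self.num_objects = num_objects
        self.overhypothesis = overhypothesis  # "disjunctive" or "conjunctive"
        # Hidden ground truth: which objects are blickets. Conjunctive runs
        # get at least two blickets so the activation rule is satisfiable.
        min_blickets = 2 if overhypothesis == "conjunctive" else 1
        self.blickets = set(rng.sample(range(num_objects),
                                       k=rng.randint(min_blickets, num_objects)))

    def step(self, placed):
        # Place a set of object indices on the detector; return True if it lights up.
        placed = set(placed)
        blickets_on_detector = placed & self.blickets
        if self.overhypothesis == "disjunctive":
            return len(blickets_on_detector) >= 1
        # Conjunctive (assumed semantics): at least two blickets placed together.
        return len(blickets_on_detector) >= 2

# Example exploration episode: test each singleton, then a pair, then all
# objects, which suffices to separate disjunctive from conjunctive structures.
if __name__ == "__main__":
    env = BlicketEnv(num_objects=3, overhypothesis="conjunctive", seed=0)
    for trial in [{0}, {1}, {2}, {0, 1}, {0, 1, 2}]:
        print(f"placed {sorted(trial)} -> detector lit: {env.step(trial)}")

An exploration strategy (whether a child's or an agent's) can then be compared by which subsets it chooses to place and how quickly its choices disambiguate the hidden rule.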
