
Intrinsic Rewards in Human Curiosity-Driven Exploration: An Empirical Study

Abstract

Despite their apparent importance for the acquisition of full-fledged human intelligence, the mechanisms of intrinsically motivated autonomous learning are poorly understood. How do humans identify useful sources of knowledge and decide which learning situations to approach in the absence of external rewards? While recognition of this problem has grown in the psychological sciences in recent years, an intriguing proposition for a possible mechanism comes from artificial intelligence, where efficient autonomous learning is achieved by programming agents to follow the heuristic of maximizing learning progress (LP) during exploration. In this study, we examine the empirical evidence for this idea. Using computational modeling, we show that humans exhibit signs of following LP while freely exploring and practicing a set of learning activities of varying difficulty, including one activity that is impossible to learn. We also discuss different ways of operationalizing the notion of LP and their plausibility in light of the empirical data. Finally, we show that models combining several types of intrinsic rewards fit human exploration data better than the single-component models considered so far in theoretical accounts.
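To make the LP heuristic concrete, the sketch below shows one common way such an agent can be simulated: it tracks recent performance on each activity, estimates learning progress as the change in success rate over a recent window, and allocates choices via a softmax over those LP estimates. This is a minimal illustration, not the authors' model; the number of activities, the per-activity learning rates, the window size, and the inverse temperature `beta` are all assumed for the example (one activity is given a zero learning rate to mimic an unlearnable task).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 activities with different true learnability.
# The last activity never improves, mimicking the unlearnable task.
n_activities = 4
learn_rates = np.array([0.02, 0.05, 0.10, 0.0])   # per-trial competence gain (assumed)
competence = np.zeros(n_activities)                # latent success probability
history = [[] for _ in range(n_activities)]        # observed outcomes per activity


def learning_progress(outcomes, window=10):
    """LP proxy: change in mean success between the two halves of a recent window."""
    if len(outcomes) < 4:                # need a few samples before LP is meaningful
        return 0.0
    recent = outcomes[-window:]
    half = len(recent) // 2
    return abs(np.mean(recent[half:]) - np.mean(recent[:half]))


beta = 5.0  # softmax inverse temperature (assumed free parameter)

for t in range(300):
    # Estimate LP for each activity and turn it into choice probabilities.
    lp = np.array([learning_progress(history[a]) for a in range(n_activities)])
    weights = np.exp(beta * lp)
    probs = weights / weights.sum()
    a = rng.choice(n_activities, p=probs)

    # Attempt the chosen activity; practicing it raises competence (except for the
    # unlearnable one, whose learning rate is zero).
    outcome = float(rng.random() < competence[a])
    history[a].append(outcome)
    competence[a] = min(1.0, competence[a] + learn_rates[a] * (1 - competence[a]))

print("choices per activity:", [len(h) for h in history])
```

Run over many trials, this kind of agent concentrates its choices on activities where performance is still improving and gradually abandons both the mastered and the unlearnable ones, which is the qualitative pattern the LP hypothesis predicts for human learners.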
