eScholarship
Open Access Publications from the University of California

Using Recurrent Neural Networks to Understand Human Reward Learning

Abstract

Computational models are highly useful in cognitive science for revealing the mechanisms of learning and decision making. However, it is hard to know whether all meaningful variance in behavior has been accounted for by the best-fit model selected through model comparison. In this work, we propose to use recurrent neural networks (RNNs) to assess the limits of predictability afforded by a model of behavior, and to reveal what (if anything) is missing in the cognitive models. We apply this approach in a complex reward-learning task with a large choice space and rich individual variability. The RNN models outperform the best known cognitive model throughout the entire learning phase. By analyzing and comparing model predictions, we show that the RNN models are more accurate at capturing the temporal dependency between subsequent choices, and better at identifying the subspace of the choice space where participants' behavior is more likely to reside. The RNNs can also capture individual differences across participants by utilizing an embedding. The usefulness of this approach suggests promising applications of RNNs for predicting human behavior in complex cognitive tasks, in order to reveal cognitive mechanisms and their variability.
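The architecture sketched in the abstract — an RNN that predicts the next choice from the history of choice/reward pairs, with a learned per-participant embedding to capture individual differences — can be illustrated as follows. This is a minimal, untrained sketch; all dimensions, names, and the specific recurrent form (a vanilla tanh RNN) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration only
n_choices, n_participants = 10, 5
d_emb, d_hid = 4, 16
d_in = n_choices + 1 + d_emb  # one-hot choice + scalar reward + embedding

E = rng.normal(scale=0.1, size=(n_participants, d_emb))  # participant embeddings
W_x = rng.normal(scale=0.1, size=(d_hid, d_in))          # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(d_hid, d_hid))         # recurrent weights
W_o = rng.normal(scale=0.1, size=(n_choices, d_hid))     # hidden-to-output weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict_next_choice(history, participant):
    """history: list of (choice_index, reward) pairs for one participant.
    Returns a probability distribution over the next choice."""
    h = np.zeros(d_hid)
    emb = E[participant]  # individual differences enter via this embedding
    for choice, reward in history:
        x = np.zeros(n_choices + 1)
        x[choice] = 1.0   # one-hot encoding of the previous choice
        x[-1] = reward    # scalar reward feedback
        x = np.concatenate([x, emb])
        h = np.tanh(W_x @ x + W_h @ h)  # recurrent state update
    return softmax(W_o @ h)

probs = predict_next_choice([(3, 1.0), (7, 0.0)], participant=2)
```

In practice the weights and embeddings would be trained jointly by minimizing the cross-entropy of participants' observed choices, so that the predictive accuracy of the fitted RNN bounds how much behavioral variance a cognitive model could in principle explain.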
