eScholarship
Open Access Publications from the University of California

Compositional generalization in multi-armed bandits

Author(s): Saanum, Tankred; Schulz, Eric; Speekenbrink, Maarten; et al.
Abstract

To what extent do human reward learning and decision-making rely on the ability to represent and generate richly structured relationships between options? We provide evidence that structure learning and the principle of compositionality play crucial roles in human reinforcement learning. In a new multi-armed bandit paradigm, participants learned representations of different reward structures and combined them to make correct generalizations about options in novel contexts. Moreover, we found substantial evidence that participants transferred knowledge of simpler reward structures to make compositional generalizations about rewards in complex contexts. This allowed participants to accumulate more rewards earlier and to explore less whenever such knowledge transfer was possible. We also provide a computational model that generalizes and composes knowledge of simple reward structures to predict complex ones. This model describes participant behaviour in the compositional generalization task better than various other models of decision-making and transfer learning.
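The abstract does not specify how the model composes reward structures, but the core idea of compositional generalization in a bandit can be illustrated with a toy sketch. Here, two hypothetical simple structures over a set of arms (a linear trend and a periodic alternation, both invented for illustration) are combined additively, and the composed prediction is used to pick an arm in a novel "composed" context before any rewards are observed there:

```python
import numpy as np

# Hypothetical example: 8 arms, each with a position-dependent reward.
arms = np.arange(8)

# Two simple reward structures a learner might have acquired separately
# (these specific functions are illustrative assumptions, not the paper's).
linear = arms / arms.max()                     # rewards rise across arms
periodic = 0.5 * (1 + np.cos(np.pi * arms))    # rewards alternate high/low

# Compositional generalization: predict rewards in a novel context by
# combining the known structures (here, simple addition).
composed = linear + periodic

# Choose the arm with the highest composed prediction, before sampling.
best_arm = int(np.argmax(composed))
print(best_arm)  # arm 6 scores highest under this composition
```

A learner with access to such a composition can exploit good arms from the first trial of the new context, which is consistent with the abstract's finding that transfer let participants accumulate rewards earlier and explore less.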
