eScholarship
Open Access Publications from the University of California

Action decision congruence between human and deep reinforcement learning agents during a coordinated action task

Abstract

Deep reinforcement learning (DRL) is capable of training agents that exceed human levels of performance in multi-agent tasks. However, the behaviors exhibited by these agents are not guaranteed to be human-like or human-compatible. This poses a problem if the goal is to design agents capable of collaborating with humans or augmenting human actions in cooperative or team-based tasks. Indeed, recommender systems designed to augment human decision-making need not only to recommend actions that align with the task goal, but also to maintain coordinative behaviors between agents. The current study explored the skill learning performance of human learners working alongside different artificial agents (AAs) during a collaborative problem-solving task, and simultaneously evaluated the effectiveness of the same AAs as action decision recommender systems to aid learning. The action decisions of the AAs were modelled either by a heuristic model based on human performance or by a deep neural network trained by reinforcement learning using self-play. In addition to evaluating skill learning performance, the current study also tested the congruence between the decisions of the AAs and the actual decisions made by humans. Results demonstrate that human performance was significantly worse when working alongside the DRL AA compared to the heuristic AA. Additionally, the action decisions participants made showed less alignment with the recommendations made by the DRL AA.
