eScholarship
Open Access Publications from the University of California

Human control redressed: Comparing AI and human predictability in a real-effort task

Abstract

Predictability is a prerequisite for effective human control of artificial intelligence (AI). For example, the inability to predict when an AI system will malfunction impedes timely human intervention. In this paper, we employ a computerized navigation task, a lunar lander game, to investigate empirically how the predictability of AI compares to that of humans. To this end, we ask participants to guess whether landings of a spaceship in this game, performed by either AI or human players, will succeed. We show that humans are worse at predicting AI performance than at predicting human performance in this environment. Significantly, participants underestimate the difference in the relative predictability of AI and, at times, overestimate their own prediction skills. We link the difference in predictability to differences in the approaches, i.e., the distinct landing patterns, that AI and humans employ to succeed in the task. These results highlight important differences in the perception of AI and humans, with implications for human-computer interaction.
