This dissertation explores how reported human trust in autonomous agents changes over time. When collaborating on a task with a shared goal, a human learns about their partner through many channels, and autonomous systems can access this information by measuring human trust. Knowing how much a human trusts an autonomous agent sheds light on how the human is likely to behave in future interactions, opening an avenue for anticipatory action planning.
We explore several questions in this work: How does human trust evolve as a human interacts with an agent whose strategy and capability change? Can a pared-down, anytime trust measurement survey capture events that change trust? And how might future autonomous systems enable decision-making in isolated space environments?
To answer the first question, we built a browser-based game and supporting infrastructure to administer a collaborative visual search task. Human subjects controlled a spotlight cursor to search a two-dimensional grid of possible points of interest while an autonomous agent simultaneously searched the same area. The autonomous agent used three different search strategies and had three different capability levels, and the resulting combinations were presented to subjects in different orders. We found that subjects' trust in the autonomous agent evolved differently depending on the order in which they encountered these combinations.
Next, in a remote supervision task, subjects used a web browser to interact with a simulated robot arm aboard a simulated International Space Station. Subjects were instructed to use the robot to sort stowage into an assigned configuration and to rate their trust whenever they felt their trust in the robot had changed. We found that subjects rated their trust shortly after a stowage placement, indicating that they drew on the task outcome when rating their trust. This context allowed an anytime trust slider to identify a factor subjects used to rate their trust without requiring a lengthy, repetitive survey.