
Trusting algorithms: performance, explanations, and sticky preferences

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

What information guides individuals to trust an algorithm? We examine this question across three experiments, which consistently found that explanations and relative performance information increased trust in an algorithm relative to a human expert. Strikingly, however, in only 23% of responses (414/1800) did an individual’s preferred agent for a task (e.g., driving a car) change from human to algorithm. Thus, initial preferences were ‘sticky’ and largely resistant to large shifts in trust. We discuss the theoretical and practical implications of this work and identify important contributions to our understanding of how summaries of information can improve people’s willingness to trust decision-aid algorithms.
