
Trust and Algorithmic Decision Making

Abstract

The acceleration and advancement of today’s technology have led to the growing use of machine learning algorithms in everyday life. As a result, our collective trust in algorithmic decision making becomes increasingly important to consider. Current literature suggests that people may be skeptical of relying on algorithmic judgment rather than human judgment, regardless of performance quality or accuracy (Logg, 2018). However, previous studies have produced conflicting results regarding this algorithmic aversion or appreciation. An online experiment using a 2x2 design with 120 adult participants was conducted to examine how the control and risk environment of an algorithm’s decision-making process affects human trust in algorithmic decision making. Results indicate that humans are less trusting of, and more averse to, automated systems in situations with higher perceived risk and lower human control. These findings shed light on the evolving relationship between humans and the automated systems we rely upon, with implications for how such systems are developed and operated.
