Consumer Responses to Algorithmic Decisions
eScholarship
Open Access Publications from the University of California

UCLA Electronic Theses and Dissertations


Abstract

It has long been acknowledged that computational prediction procedures can yield more accurate predictions than human judges. Nevertheless, people are often algorithm averse; that is, they are less willing to rely on algorithms than on humans in tasks such as forecasting. Previous research on algorithm aversion has largely examined algorithmic forecasts or recommendations, not algorithmic decisions. This dissertation explores an uninvestigated facet of algorithm aversion: consumer attitudes and behavior regarding decisions that algorithms make on their behalf. Consumer responses to autonomous technology, that is, technology that performs its operations without any human involvement, have been recognized as an important construct to be studied in consumer research. As algorithms increasingly become autonomous decision-makers, it is crucial to study how consumers perceive and react to algorithmic decisions. Chapter 1 encompasses five pre-registered studies (combined N = 2,535) conducted across diverse digital domains. It highlights consumers’ divergent conceptualizations of human and algorithmic decisions and suggests that consumers perceive algorithms as black boxes, whereas they perceive humans as more transparent. Lower satisfaction with algorithmic decisions is accounted for by lower trust in algorithms, which results from consumers’ perception that an algorithm’s decision is less transparent than a human’s. I find that increased input explainability (i.e., the consumer’s ability to access relevant input information regarding a particular decision) is an effective intervention to increase transparency and trust, leading to higher consumer satisfaction with algorithmic decisions. Chapter 2 investigates consumers’ perceptions of bias in algorithms and humans.
The findings of four studies (combined N = 3,121) demonstrate a “bias tolerance” phenomenon: people acknowledge but disregard human bias and trust human decisions more than algorithmic ones. Algorithmic decisions are perceived as less biased, yet paradoxically as less trustworthy and satisfactory than human decisions. This occurs because the negative effect of human (vs. algorithmic) bias on satisfaction is smaller than the positive effect of human emotionality on satisfaction. I find boundary conditions for bias tolerance in tasks (material purchases and data handling) where human emotionality and bias are irrelevant. Across two chapters, this dissertation contributes substantively and theoretically to our understanding of how consumers’ divergent conceptualizations of human and algorithmic decision processes influence their responses to those decisions. As algorithms increasingly become autonomous decision-makers, understanding how consumers perceive and react to algorithmic decisions can help identify methods, such as input explainability, for more satisfactory consumer-algorithm interaction.
