eScholarship
Open Access Publications from the University of California

The Quantified Moral Self

Creative Commons 'BY' version 4.0 license
Abstract

Artificial Intelligence (AI) can be harnessed to create sophisticated social and moral scoring systems, enabling people and organisations to form judgements of others at scale. While this capability has many useful applications (e.g., matching romantic partners who are aligned in their moral principles), it also raises many ethical questions. For example, there is widespread concern about the use of social credit systems in the political domain. In this project, we approach this topic from a psychological perspective. With experimental evidence, we show that the acceptability of moral scoring by AI depends on its perceived accuracy, and that perceived accuracy is compromised by people's tendency to see themselves as morally peculiar, and thus less characterisable by AI. That is, we suggest that people overestimate the peculiarity of their moral profile, believe that AI will neglect this peculiarity, and for this reason resist the introduction of moral scoring by AI.
