eScholarship
Open Access Publications from the University of California

A Bayesian Multilevel Analysis of Belief Alignment Effect Predicting Human Moral Intuitions of Artificial Intelligence Judgements

Abstract

Despite substantial progress in artificial intelligence (AI), little is known about people’s moral intuitions towards AI systems. Given that politico-moral intuitions often influence judgements in non-rational ways, we investigated participants’ willingness to act on verdicts provided by an expert AI system, their trust in the AI, and their perceptions of its fairness as a function of the system’s (dis)agreement with their pre-existing politico-moral beliefs across a range of morally contentious issues. Belief alignment increased willingness to act on the AI’s verdicts but did not increase trust in the AI or perceptions of its fairness, and this effect was unaffected by general attitudes towards AI. Our findings suggest a dissociation between acceptance of AI recommendations and judgements of the AI’s trustworthiness and fairness, and indicate that such acceptance is partly driven by alignment with pre-existing intuitions.
