
Should Moral Decisions be Different for Human and Artificial Cognitive Agents?

Abstract

Moral judgments are elicited using dilemmas presenting hypothetical situations in which an agent must choose between letting several people die or sacrificing one person in order to save them. The evaluation of the action or inaction of a human agent is compared to those of two artificial agents – a humanoid robot and an automated system. Ratings of rightness, blamefulness and moral permissibility of action or inaction in incidental and instrumental moral dilemmas are used. The results show that for the artificial cognitive agents the utilitarian action is rated as more morally permissible than inaction. The humanoid robot is found to be less blameworthy for his choices compared to the human agent or to the automated system. Action is found to be more appropriate, morally permissible, more right, and less blameworthy than inaction only for the incidental scenarios. The results are interpreted and discussed from the perspective of perceived moral agency.
