Open Access Publications from the University of California

Creating a Tool to Reproducibly Estimate the Ethical Impact of Artificial Intelligence

  • Author(s):
      Jordan, Sara
      Fazelpour, Sina
      Koshiyama, Adriano
      Kueper, Jaky
      DeChant, Chad
      Leong, Brenda
      Marchant, Gary
      Shank, Craig
      et al.

How can an organization systematically and reproducibly measure the ethical impact of its AI-enabled platforms? Organizations that build applications enhanced by artificial intelligence and machine learning (AI/ML) are increasingly asked to review the ethical impact of their work, and governance and oversight bodies are increasingly asked to provide documentation to guide the conduct of ethical impact assessments. This document outlines a draft procedure for organizations to evaluate the ethical impacts of their work. We propose that ethical impact can be evaluated via a principles-based approach in which the effects of a platform's probable uses are interrogated through informative questions, with answers scaled and weighted to produce a multi-layered score. We initially assess ethical impact as the summed score of a project's potential to protect human rights. However, we do not suggest that the ethical impact of platforms be assessed through preservation of human rights alone, a concept that is decidedly difficult to measure. Instead, we propose that ethical impact can be measured through a similar procedure assessing conformity with other important principles, such as protection of decisional autonomy, explainability, reduction of bias, assurances of algorithmic competence, or safety. In this initial draft paper, we demonstrate the application of our method for ethical impact assessment to the principles of human rights and bias.
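The scoring idea described in the abstract (scaled and weighted answers summed into per-principle scores, which combine into a multi-layered overall score) can be sketched as follows. This is a minimal illustration, not the authors' actual instrument: the question counts, answer scales, and weights below are hypothetical assumptions.

```python
def principle_score(answers, weights, scale_max=5):
    """Scale each raw answer to [0, 1], weight it, and return the
    weighted average as the score for one principle.

    answers: raw responses on a 0..scale_max scale, one per question
    weights: relative importance of each question (same length)
    """
    if len(answers) != len(weights):
        raise ValueError("one weight per answer required")
    total_weight = sum(weights)
    return sum((a / scale_max) * w for a, w in zip(answers, weights)) / total_weight


# Hypothetical assessment answers on a 0-5 scale, with assumed weights.
hr_answers = [4, 5, 2]      # human-rights questions
hr_weights = [2.0, 1.0, 1.0]

bias_answers = [3, 4]       # bias questions
bias_weights = [1.0, 1.0]

scores = {
    "human_rights": principle_score(hr_answers, hr_weights),
    "bias": principle_score(bias_answers, bias_weights),
}

# Second layer: combine per-principle scores into an overall score
# (here an unweighted mean; a real assessment might weight principles too).
overall = sum(scores.values()) / len(scores)
```

A two-layer structure like this keeps each principle's score interpretable on its own while still yielding a single summary number for comparison across projects.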
