eScholarship
Open Access Publications from the University of California

From Shortcut to Sleight of Hand: Why the Checklist Approach in the EU Guidelines Does Not Work

Abstract

In April 2019, the High-Level Expert Group on Artificial Intelligence (AI) appointed by the EU Commission presented its “Ethics Guidelines for Trustworthy Artificial Intelligence,” followed in June 2019 by a second document, “Policy and Investment Recommendations.”

The Guidelines establish three characteristics (lawful, ethical, and robust) and seven key requirements (Human agency and oversight; Technical Robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; and Accountability) that the development of AI should follow.

The Guidelines are of utmost significance for the international debate over the regulation of AI. Firstly, they aspire to set a universal standard of care for the future development of AI. Secondly, they were developed by a group of experts appointed by a regulatory body, and will therefore shape the normative approach in EU regulation of AI and in its interaction with foreign countries. As the GDPR has shown, the effect of such normative activity extends well beyond the territory of the European Union.

One of the most debated aspects of the Guidelines was the need for an objective methodology to evaluate conformity with the key requirements. For this purpose, the Expert Group drafted an “assessment checklist” in the final part of the document: the list is meant to be incorporated into existing practices, as a way for technology developers to consider relevant ethical issues and create more “trustworthy” AI. Our group undertook a critical assessment of the proposed tool from a multidisciplinary perspective, examining its implications and limitations for global AI development.
