
What makes a good explanation? Cognitive dimensions of explaining intelligent machines

Abstract

Explainability is assumed to be a key factor for the adoption of Artificial Intelligence systems in a wide range of contexts (Hoffman, Mueller, & Klein, 2017; Hoffman, Mueller, Klein, & Litman, 2018; Doran, Schulz, & Besold, 2017; Lipton, 2018; Miller, 2017; Lombrozo, 2016). The use of AI components in self-driving cars, medical diagnosis, or insurance and financial services has shown that when decisions are taken or suggested by automated systems it is essential for practical, social, and increasingly legal reasons that an explanation can be provided to users, developers or regulators. Moreover, the reasons for equipping intelligent systems with explanation capabilities are not limited to user rights and acceptance. Explainability is also needed for designers and developers to enhance system robustness and enable diagnostics to prevent bias, unfairness and discrimination, as well as to increase trust by all users in why and how decisions are made. Against that background, increased efforts are directed towards studying and provisioning explainable intelligent systems, both in industry and academia, sparked by initiatives like the DARPA Explainable Artificial Intelligence Program (DARPA, 2016). In parallel, scientific conferences and workshops dedicated to explainability are now regularly organised, such as the ‘ACM Conference on Fairness, Accountability, and Transparency (ACM FAT)’ (Friedler & Wilson, n.d.) or the ‘Workshop on Explainability in AI’ at the 2017 and 2018 editions of the International Joint Conference on Artificial Intelligence. However, one important question remains hitherto unanswered: What are the criteria for a good explanation?
