Cognitive Differences in Human and AI Explanation
Abstract
How do humans explain and cognize visual information? Why do AI explanations in radiology, despite their remarkable accuracy, fail to gain human trust? In a study of 13 radiology practitioners, we found that AI explanations of X-rays differ from human explanations in three ways. The first concerns visual reasoning and evidence: how humans get other humans to see an interpretation’s validity. Machine-learned classifications lack this evidentiary grounding, and consequently XAI explanations such as heat maps fail to meet many users' needs. The second concerns the varying needs of interlocutors. Predictably, explanations suitable for experts and novices differ; presuppositions about an explainee's knowledge and goals inform explanation content. Pragmatics matter. The third concerns how linguistic terms and phrases are used to hedge uncertainty. There is no reason XAI cannot satisfy these human requirements. To do so, however, will require deeper theories of human explanation.