
Experimental Investigation of Explanation Presentation for Visual Tasks with XAI

Abstract

Explainable AI (XAI) has been developed to make AI understandable to humans by offering explanations of its operations. However, too much explanation can cause cognitive overload and lead users to develop inappropriate trust in AI. To investigate the appropriate amount of explanation, we examined the influence of explanation type on trust in AI using a classic visual search task in Experiment 1, and the influence of adapted explanation presentation on task performance using a practical visual identification task in Experiment 2. The results showed that displaying AI results alone increased trust and task performance in a low-complexity task, whereas displaying AI results together with highly interpretable AI attention heatmaps (which indicate the locations in task images on which the AI focused) increased trust and task performance in a high-complexity task. These findings demonstrate the importance of adjusting the amount of explanation in visual tasks with XAI.
