Explaining Machine Learned Relational Concepts in Visual Domains - Effects of Perceived Accuracy on Joint Performance and Trust
Abstract
Most machine-learning-based decision support systems are black-box models that are not interpretable for humans. However, the demand for explainable models that create comprehensible and trustworthy systems is growing, particularly in complex domains involving risky decisions. In many domains, decision making is based on visual information. We argue that, nevertheless, explanations need to be verbal in order to communicate the relevance of specific feature values and critical relations for a classification decision. To address this claim, we introduce a fictitious visual domain from archaeology in which aerial views of ancient grave sites must be classified. Trustworthiness relies, among other factors, on the perceived or assumed correctness of a system's decisions. Models learned by induction from data generally cannot achieve perfect predictive accuracy, and one can assume that unexplained erroneous system decisions might reduce trust. In a 2×2 factorial online experiment with 190 participants, we investigated the effect of verbal explanations and of information about system errors. Our results show that explanations increase comprehension of the factors on which the classification of grave sites is based, and that explanations increase the joint performance of human and system on new decision tasks. Furthermore, explanations result in more confidence in decision making and higher trust in the system.