What sort of explanation should we expect from an algorithmic decision-making system?
Abstract
Requirements of transparency and explainability have drawn considerable attention in AI ethics, yet it remains unclear what explainability is for, to whom AI should be explainable, and what kind of explanation is demanded. First, I take the principle of explainability to state that there is a prima facie duty to make AI explainable when it is used in morally significant situations. Second, I show that explainability has a dual nature. Most of the existing literature rests on the unjustified assumption that explainability serves a single purpose and that one kind of explanation should be given to end-users. I argue, however, that explainability is directed both toward decision-makers, who need it to retain control over their decisions, and toward decision-recipients, who need it to trust algorithmic decisions. Consequently, different explanations need to be given to different stakeholders for different purposes.