Advances In Explainable Artificial Intelligence, Fair Machine Learning, And The Intersections Thereof

Abstract

Artificial intelligence (AI), used correctly, has the capacity to improve human life by automating procedures that previously required human expertise and precision, particularly procedures that can greatly affect people’s lives and where the cost of a mistake is high. Unfortunately, the use of machine learning (ML) algorithms carries risks that may limit their applicability in such sensitive domains. In particular, ML algorithms solve tasks by optimizing a complex non-linear mapping between an input space and an output space. While the automated process of tuning this function is powerful, it ultimately renders these learners uninterpretable and subject to error, misuse, or harmful bias.

The fields of explainable artificial intelligence (XAI) and fair machine learning exist to combat these issues. XAI seeks to explain how ML agents operate in human-interpretable terms, while fairness aims to correct or avoid potentially unfair outcomes. While existing work has laid promising groundwork toward these ends, both domains have limitations that should be rectified before AI can be trusted with particularly sensitive tasks.

This dissertation aims to extend XAI and fair machine learning by making headway on these limitations. For XAI, we create approaches that explain the entire model rather than individual actions, we develop techniques tailored to ML tasks beyond supervised learning, and we examine alternatives to the input space as the means of providing explanations. For fairness, we draw on the social-science literature to create fair ML algorithms that match established models of how unfairness and discrimination occur, which we argue makes them superior to existing techniques that do not leverage this theory. Finally, we introduce the novel concept of machine-to-machine explanation: the idea that explanation technology can be used for additional computational tasks, enabling ML models to collaborate to improve their performance.
