Explainable Artificial Intelligence for Graph Data
- Zhang, Shichang
- Advisor(s): Sun, Yizhou
Abstract
The development of artificial intelligence (AI) has significantly impacted our daily lives and even driven new scientific discoveries. However, modern AI models based on deep learning remain opaque "black boxes," raising a critical question: why are these models capable of achieving such remarkable outcomes? Answering this question motivates research on Explainable AI (XAI), which offers numerous benefits, such as enhancing model performance, establishing user trust, and extracting deeper insights from data. While XAI has been explored for data modalities like images and text, research on graph data, a more complex modality that represents both entities and their relationships, remains underdeveloped. Given the ubiquity of graph data and its prevalent applications across major domains including science, business, and healthcare, XAI for graph data is a critical research direction.
This thesis addresses the gap in XAI for graph data from three complementary and equally important perspectives: model, user, and data. Accordingly, my research advances XAI for graph data by developing: (1) model-oriented explanation techniques that illuminate the mechanisms and enhance the performance of state-of-the-art AI models on graph data; (2) user-oriented explanation approaches that offer intuitive visualizations and natural language explanations to establish user trust in graph AI models for real-world applications; and (3) data-oriented explanation methods that identify key patterns and extract insights from graph data, potentially leading to new scientific discoveries. By integrating these three perspectives, this thesis enhances the transparency, trustworthiness, and insightfulness of AI for graph data across domains and applications.