Graph data capture relationships between entities in a variety of domains, including communication, social, and interaction networks. Representation learning makes graph data easier to use for downstream tasks such as graph classification, link prediction, and clustering. Decisions made on graphs depend on complex patterns that combine rich structural and attribute information. Therefore, explaining the decisions made by representation learning models in high-stakes applications (e.g., anomaly detection and drug discovery) is critical for increasing transparency and guiding model improvements. Moreover, human expertise can guide machine learning decisions, raising questions about human-AI interaction that require further analysis.
This dissertation focuses on our research on two key topics: transparent representation learning on graphs and human-AI collaboration. First, we present our proposed frameworks for graph anomaly detection, which are designed to improve both accuracy and transparency. Next, we examine explainability in graph machine learning, discussing our novel post-hoc global counterfactual and robust ante-hoc graph explainers. Fairness is also a crucial aspect of transparent machine learning, and we propose a new individual fairness method for clustering. Finally, we investigate the impact of human-AI collaboration on decision-making under risk and in feedback-loop systems.