
UCLA Electronic Theses and Dissertations

Probabilistic Reasoning for Fair and Robust Decision Making

Abstract

Automated decision-making systems are increasingly deployed in areas with high personal and societal impact. This has naturally led to growing interest in trustworthy artificial intelligence (AI) and machine learning (ML), encompassing many fields of research including algorithmic fairness, robustness, explainability, and privacy. These works share a common theme: questioning and moderating the behavior of automated tools in real-world settings that inherently involve various forms of uncertainty.

This dissertation explores how probabilistic modeling and reasoning offer a principled framework for handling uncertainty when addressing trustworthy AI issues, in particular by explicitly modeling the underlying distribution of the world. The main contributions are as follows. First, it demonstrates that many problems in trustworthy AI can be cast as probabilistic reasoning tasks of varying complexity. Second, it proposes algorithms to learn fair and robust decision-making systems while handling many sources of uncertainty, such as missing or biased labels at training time and missing features at prediction time. The proposed approach relies heavily on probabilistic models that are expressive enough to describe the world underlying the system, while being tractable enough to answer various probabilistic queries. The final contribution is to show that probabilistic circuits are an effective model class for this framework and to extend their reasoning capabilities even further.
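
As a minimal illustration of the kind of query this framework relies on (not the dissertation's actual method), the Python sketch below marginalizes missing features out of a small discrete joint distribution to score a prediction. The toy distribution, function names, and numbers are assumptions for illustration only; a probabilistic circuit would answer the same marginal query tractably, in time linear in the circuit size, rather than by enumeration.

    # Toy joint distribution P(x1, x2, y) over binary variables, standing in
    # for a tractable probabilistic model such as a probabilistic circuit.
    # Hypothetical numbers, chosen only so the probabilities sum to 1.
    P = {
        (0, 0, 0): 0.20, (0, 0, 1): 0.05,
        (0, 1, 0): 0.10, (0, 1, 1): 0.15,
        (1, 0, 0): 0.05, (1, 0, 1): 0.15,
        (1, 1, 0): 0.05, (1, 1, 1): 0.25,
    }

    def predict_with_missing(x1=None, x2=None):
        """Return P(y=1 | observed features), marginalizing missing ones.

        A probabilistic circuit evaluates such marginals tractably; here we
        enumerate all joint states for clarity.
        """
        def consistent(state):
            v1, v2, _ = state
            return (x1 is None or v1 == x1) and (x2 is None or v2 == x2)

        evidence = sum(p for s, p in P.items() if consistent(s))
        joint_y1 = sum(p for s, p in P.items() if consistent(s) and s[2] == 1)
        return joint_y1 / evidence

    # Predict even though feature x2 is missing at prediction time.
    print(predict_with_missing(x1=1))        # P(y=1 | x1=1) = 0.8
    print(predict_with_missing(x1=1, x2=0))  # P(y=1 | x1=1, x2=0) = 0.75

The point of the sketch is the interface, not the enumeration: a model that supports marginal queries lets the same classifier make calibrated predictions whether or not every feature is observed, which is one of the prediction-time uncertainties the abstract refers to.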
