eScholarship
Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Learning How and Why: Causal Learning and Explanation from Physical, Interactive, and Communicative Environments

Abstract

Artificial agents operating alongside humans in daily life will be expected to handle novel circumstances and to explain their behavior to humans. In this dissertation, we examine these two requirements through the lenses of generalization and explanation. Generalization relies on a learning algorithm that performs well in unseen circumstances and on updating the model to handle novelty; in practice, learning algorithms must be equipped with mechanisms that enable generalization. We examine the generalization question from multiple perspectives, namely imitation learning and causal learning, and show that generalization performance benefits from understanding both abstract high-level task structure and low-level perceptual inductive biases. We also examine explanations in imitation learning and communicative learning paradigms. These explanations are intended to foster human trust and to address the value alignment problem between humans and machines. In the imitation learning setting, we show that the model components that contribute most to fostering human trust do not necessarily correspond to those that contribute most to task performance. In the communicative learning paradigm, we show how theory of mind can align a machine's values with the preferences of a human user. Taken together, this dissertation helps address two of the most critical problems facing AI systems today: machine performance in unseen scenarios and human-machine trust.
