eScholarship
Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

Neurosymbolic Learning and Reasoning for Trustworthy AI

Abstract

As applications of Artificial Intelligence (AI) become ubiquitous, the quest to develop trustworthy AI models intensifies. Deep neural networks, while powerful learners, fall short in reasoning with domain knowledge and in offering robustness guarantees. Neurosymbolic AI bridges this gap by melding the learning capabilities of neural networks with reasoning techniques from symbolic AI, thus building models that behave as intended. This dissertation presents my work addressing two fundamental challenges in neurosymbolic AI: 1) enabling differentiable learning of deep neural networks under symbolic constraints and 2) performing scalable and reliable probabilistic reasoning over expressive symbolic constraints. It shows how these neurosymbolic approaches achieve trustworthiness through explainability, uncertainty quantification, and the incorporation of domain knowledge. These contributions enable broader applications of neurosymbolic AI across domains, including scientific discovery.
