Neurosymbolic Learning and Reasoning for Trustworthy AI
- Zeng, Zhe
- Advisor(s): Van den Broeck, Guy
Abstract
As Artificial Intelligence (AI) applications become ubiquitous, the quest for trustworthy AI models intensifies. Deep neural networks, while powerful learners, fall short at reasoning with domain knowledge and at offering robustness guarantees. Neurosymbolic AI bridges this gap by melding the learning capabilities of neural networks with reasoning techniques from symbolic AI, thus building models that behave as intended. This dissertation presents my work addressing two fundamental challenges in neurosymbolic AI: 1) enabling differentiable learning of deep neural networks under symbolic constraints, and 2) performing scalable and reliable probabilistic reasoning over expressive symbolic constraints. It shows how these neurosymbolic approaches achieve trustworthiness through explainability, uncertainty quantification, and the incorporation of domain knowledge. These contributions enable broader applications of neurosymbolic AI across domains, including scientific discovery.