eScholarship
Open Access Publications from the University of California

Leveraging Unstructured Statistical Knowledge in a Probabilistic Language of Thought

Creative Commons 'BY' version 4.0 license
Abstract

One hallmark of human reasoning is that we can bring to bear a diverse web of common-sense knowledge in any situation. The vastness of our knowledge poses a challenge for the practical implementation of reasoning systems as well as for our cognitive theories – how do people represent their common-sense knowledge? On the one hand, our best models of sophisticated reasoning are top-down, making use primarily of symbolically-encoded knowledge. On the other, much of our understanding of the statistical properties of our environment may arise in a bottom-up fashion, for example through associationist learning mechanisms. Indeed, recent advances in AI have enabled the development of billion-parameter language models that can scour for patterns in gigabytes of text from the web, picking up a surprising amount of common-sense knowledge along the way, but they fail to learn the structure of coherent reasoning. We propose combining these approaches by embedding language-model-backed primitives into a state-of-the-art probabilistic programming language (PPL). On two open-ended reasoning tasks, we show that our PPL models with neural knowledge components characterize the distribution of human responses more accurately than the neural language models alone, raising interesting questions about how people might use language as an interface to common-sense knowledge, and suggesting that building probabilistic models with neural language-model components may be a promising approach for more human-like AI.
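To make the central idea concrete, the following Python sketch illustrates, under loose assumptions, what it can mean to embed a language-model-backed primitive inside a probabilistic model: a primitive distribution proposes common-sense completions, and simple importance sampling reweights those proposals against an observation. This is a minimal, hypothetical illustration, not the authors' implementation (which uses a full PPL); the function lm_continuations and its toy output distribution are placeholders standing in for a real neural language model.

import random
from typing import Dict, List, Tuple

def lm_continuations(prompt: str) -> List[Tuple[str, float]]:
    """Hypothetical LM-backed primitive: candidate completions of a prompt
    with their probabilities. A real system would query a neural language
    model here; this stub returns a fixed toy distribution for illustration."""
    return [("a cup of coffee", 0.5), ("a sandwich", 0.3), ("an umbrella", 0.2)]

def generative_model() -> str:
    """Sample one common-sense completion from the LM-backed primitive."""
    candidates = lm_continuations("For breakfast, she ordered")
    items, probs = zip(*candidates)
    return random.choices(items, weights=probs, k=1)[0]

def likelihood(latent: str, observation: str) -> float:
    """Toy likelihood: how compatible a sampled completion is with an observed cue."""
    return 1.0 if observation in latent else 0.1

def importance_sample(observation: str, n: int = 1000) -> Dict[str, float]:
    """Approximate the posterior over completions given an observation by
    sampling from the generative model and weighting by the likelihood."""
    weights: Dict[str, float] = {}
    for _ in range(n):
        latent = generative_model()
        weights[latent] = weights.get(latent, 0.0) + likelihood(latent, observation)
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

if __name__ == "__main__":
    # Conditioning on the cue "coffee" shifts posterior mass toward the
    # LM-proposed completion that mentions coffee.
    print(importance_sample("coffee"))

The point of the sketch is the division of labor the abstract describes: the language model supplies unstructured statistical knowledge through its proposals, while the surrounding probabilistic program supplies the structure of coherent inference.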
