UCLA Electronic Theses and Dissertations (eScholarship, University of California)
Bayesian Models of Learning and Reasoning with Relations

Abstract

How do humans acquire relational concepts such as larger, which are essential for analogical inference and other forms of high-level reasoning? Are they necessarily innate, or can they be learned from non-relational inputs? Using comparative relations as a model domain, we show that structured relations can be learned from unstructured inputs of realistic complexity, applying bottom-up Bayesian learning mechanisms that make minimal assumptions about innate representations.

First, we introduce Bayesian Analogy with Relational Transformations (BART), which represents relations as probabilistic weight distributions over object features. BART learns two-place relations such as larger by bootstrapping from empirical priors derived from initial learning of one-place predicates such as large. The learned relational representations allow classification of novel pairs and yield the kind of distance effect observed in both humans and other primates. Furthermore, BART can transform its learned weight distributions to reliably solve four-term analogies based on higher-order relations such as opposite (e.g., larger:smaller :: fiercer:meeker).

Next, we present BARTlet, a representationally simpler version of BART that models how symbolic magnitudes (e.g., size or intelligence of animals) are derived, represented, and compared. BARTlet creates magnitude distributions for objects by applying BART-like weights for categorical predicates such as large (learned with the aid of empirical priors derived from pre-categorical comparisons) to more primitive object features. By incorporating psychological reference points that control the precision of these magnitudes in working memory, BARTlet can account for a wide range of empirical phenomena involving magnitude comparisons, including the distance effect, the congruity effect, the markedness effect, and sensitivity to the range of stimuli.

Finally, we extend the original discriminative BART model to generate (rather than classify) relational instances, allowing it to make quasi-deductive transitive inferences (e.g., "If A is larger than B and B is larger than C, then A is larger than C") and predict human responses to questions such as, "What is an animal that is smaller than a dog?" Our work is the first demonstration that relations and symbolic magnitudes can be learned from complex non-relational inputs by bootstrapping from prior learning of simpler concepts, enabling human-like analogical, comparative, generative, and deductive reasoning.
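To make the bootstrapping idea concrete, the following is a minimal sketch (in Python with numpy, not code from the dissertation) of how posterior weights for a one-place predicate such as large might seed an empirical prior for the two-place relation larger over concatenated pair features. The feature dimensionality, the Gaussian prior, and the MAP logistic-regression learner are all illustrative assumptions.

```python
# Minimal sketch of BART-style bootstrapping: a one-place predicate ("large")
# is learned first, and its learned weights seed an empirical prior for the
# two-place relation ("larger"). Dimensions and learner are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 4                                    # number of object features (assumed)

def fit_logistic_map(X, y, prior_mean, prior_var, lr=0.1, steps=2000):
    """MAP estimate of logistic-regression weights under a Gaussian prior."""
    w = prior_mean.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (y - p) - (w - prior_mean) / prior_var
        w += lr * grad / len(y)
    return w

# Stage 1: learn the one-place predicate "large" from single objects, whose
# first feature (by assumption) encodes physical size.
X_obj = rng.normal(size=(200, D))
y_large = (X_obj[:, 0] > 0).astype(float)
w_large = fit_logistic_map(X_obj, y_large, np.zeros(D), prior_var=10.0)

# Stage 2: the empirical prior for "larger(A, B)" places +w_large on A's
# features and -w_large on B's features in the concatenated pair vector.
prior_larger = np.concatenate([w_large, -w_large])

A, B = rng.normal(size=(2, D))
A[0], B[0] = 2.0, -1.0                   # A is in fact the larger object
pair = np.concatenate([A, B])
print(f"P(larger(A, B)) ~ {1.0 / (1.0 + np.exp(-pair @ prior_larger)):.2f}")
```

This prior lets novel pairs be classified before any direct relational training, which is the sense in which the relation is bootstrapped from the simpler predicate.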
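The analogical transformation can be sketched in the same toy setting. One simple reading of the description above, which is an assumption of this sketch rather than the dissertation's exact mechanism, is that opposite corresponds to exchanging the role-A and role-B halves of a relational weight vector, so that transforming larger yields smaller; the same transformation applied to fiercer then selects meeker among hypothetical candidates.

```python
# Minimal sketch of solving opposite-based four-term analogies by
# transforming learned relational weight vectors. The pair-feature layout
# [size_A, ferocity_A, size_B, ferocity_B] and candidate set are toy examples.
import numpy as np

def opposite(w_pair):
    """Swap the role-A and role-B halves of a relational weight vector."""
    d = len(w_pair) // 2
    return np.concatenate([w_pair[d:], w_pair[:d]])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

w_larger  = np.array([ 1.0,  0.0, -1.0,  0.0])
w_smaller = np.array([-1.0,  0.0,  1.0,  0.0])
w_fiercer = np.array([ 0.0,  1.0,  0.0, -1.0])
w_meeker  = np.array([ 0.0, -1.0,  0.0,  1.0])

# larger : smaller :: fiercer : ?
assert np.allclose(opposite(w_larger), w_smaller)  # transformation fits A:B
candidates = {"meeker": w_meeker, "smaller": w_smaller, "larger": w_larger}
best = max(candidates, key=lambda k: cosine(opposite(w_fiercer), candidates[k]))
print(best)                                        # -> "meeker"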
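BARTlet's comparison process can likewise be illustrated with a noisy magnitude comparison that reproduces a distance effect. Treating working-memory precision as a single Gaussian noise parameter, standing in very loosely for the reference-point mechanism described above, is an assumption of this sketch.

```python
# Minimal sketch of a BARTlet-style magnitude comparison yielding a distance
# effect: each object's symbolic magnitude is the learned "large" weights
# applied to its features, held in working memory with Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)

def magnitude(x, w_large):
    """Scalar symbolic magnitude of an object (assumed dot-product form)."""
    return x @ w_large

def p_choose_first(m1, m2, noise=0.5, n=10_000):
    """Probability that a noisy comparison judges object 1 as larger."""
    s1 = m1 + rng.normal(0.0, noise, n)
    s2 = m2 + rng.normal(0.0, noise, n)
    return (s1 > s2).mean()

# Distance effect: accuracy rises as the magnitude difference grows.
for d in (0.25, 0.5, 1.0, 2.0):
    print(f"distance {d:4.2f}: P(correct) = {p_choose_first(d, 0.0):.3f}")
```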
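Finally, the generative extension can be sketched as sampling a second object so that the learned relation holds, here via simple rejection sampling, an assumed stand-in for the model's actual generative machinery, and chaining such samples to realize a transitive premise set.

```python
# Minimal sketch of generating (rather than classifying) relational instances,
# then checking a quasi-deductive transitive inference A > B > C => A > C.
# The toy "larger" weights and rejection sampler are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
D = 4
w_larger = np.concatenate([np.eye(D)[0], -np.eye(D)[0]])  # toy pair weights

def generate_smaller_than(a, w_pair, threshold=0.9, tries=1000):
    """Sample an object b such that P(larger(a, b)) exceeds threshold."""
    for _ in range(tries):
        b = rng.normal(size=len(a))
        p = 1.0 / (1.0 + np.exp(-np.concatenate([a, b]) @ w_pair))
        if p > threshold:
            return b
    raise RuntimeError("no satisfying sample found")

# "What is an animal that is smaller than a dog?" — generate B given A,
# then chain generations to realize the premises A > B and B > C.
A = rng.normal(size=D); A[0] = 2.0
B = generate_smaller_than(A, w_larger)
C = generate_smaller_than(B, w_larger)
p_AC = 1.0 / (1.0 + np.exp(-np.concatenate([A, C]) @ w_larger))
print(f"P(larger(A, C)) = {p_AC:.2f}")   # high: the transitive conclusion holds
```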
