eScholarship
Open Access Publications from the University of California

Compositional Generalization in a Graph-based Model of Distributional Semantics

Abstract

A critical part of language comprehension is inferring omitted but plausible information from linguistic descriptions of events. For instance, the verb phrase ‘preserve vegetable’ implies the instrument vinegar, whereas ‘preserve fruit’ implies the instrument dehydrator. We studied the ability of distributional semantic models to perform this kind of semantic inference after training them on an artificial corpus with strictly controlled constraints on which verb phrases occur with which instruments. Importantly, inferring omitted but plausible instruments in our task requires compositional generalization. We found that contemporary neural network models fall short of generalizing learned selectional constraints, whereas a graph-based distributional semantic model, trained on constituency-parsed data and equipped with a spreading-activation procedure for calculating semantic relatedness, achieves perfect performance. Our findings shed light on the mechanisms that give rise to compositional generalization and on the use of graphs to model semantic memory.
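To make the spreading-activation idea concrete, the following is a minimal, hypothetical sketch: a toy graph links verbs, nouns, and instruments, and activation injected at one node propagates outward, attenuating at each hop. The graph contents, the decay parameter, and the propagation rule are illustrative assumptions, not the paper's actual model or corpus.

```python
# Hypothetical sketch of spreading activation over a small word graph.
# Nodes, edges, and parameters are illustrative only.

def spread_activation(graph, source, decay=0.5, steps=3):
    """Propagate activation from `source` through `graph`,
    splitting it among a node's neighbors and attenuating
    it by `decay` at each hop."""
    activation = {node: 0.0 for node in graph}
    activation[source] = 1.0
    frontier = {source: 1.0}
    for _ in range(steps):
        next_frontier = {}
        for node, act in frontier.items():
            for neighbor in graph[node]:
                passed = act * decay / len(graph[node])
                activation[neighbor] += passed
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + passed
        frontier = next_frontier
    return activation

# Toy graph linking a verb and its objects to their implied instruments.
graph = {
    "preserve": ["vegetable", "fruit"],
    "vegetable": ["preserve", "vinegar"],
    "fruit": ["preserve", "dehydrator"],
    "vinegar": ["vegetable"],
    "dehydrator": ["fruit"],
}

act = spread_activation(graph, "vegetable")
# 'vinegar' accumulates more activation than 'dehydrator', because it
# is directly linked to 'vegetable' rather than reached via 'preserve'.
```

Under this toy setup, relatedness scores fall out of how much activation reaches each candidate instrument, which is the intuition behind using such a procedure to rank omitted but plausible instruments.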
