
Evaluating systematicity in neural networks with natural language inference

Abstract

Compositionality makes linguistic creativity possible. By combining words, we can express uncountably many thoughts; by learning new words, we can extend the system and express a vast number of new thoughts. Recently, a number of studies have questioned the ability of neural networks to generalize compositionally (Dasgupta, Guo, Gershman & Goodman, 2018). We extend this line of work by systematically investigating the way in which these systems generalize novel words. In the setting of a simple system for natural language inference, natural logic (MacCartney & Manning, 2007), we systematically explore the generalization capabilities of various neural network architectures. We identify several key properties of a compositional system and develop metrics to test them. We show that these architectures do not generalize in human-like ways, lacking the inductive leaps characteristic of human learning.
