
Inferential Role Semantics for Natural Language

Abstract

Cognitive models have long been used to study linguistic phenomena spanning the domains of phonology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. To help address this problem, we introduce a tree-structured neural model that is trained to generate further sentences that follow from an input sentence. These further sentences chart out the "inferential role" of the input sentence, which we argue constitutes an important part of its meaning. The model is trained using the Stanford Natural Language Inference (SNLI) dataset, and to evaluate its performance, we report entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both ground-truth and model-generated entailments for a random selection of sentences in this test set. Finally, we examine a number of qualitative features of the model's ability to generalize. Taken together, these analyses indicate that our model is able to accurately account for important inferential relationships amongst linguistic expressions.
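To make the training setup concrete, the sketch below shows one plausible way to learn entailment generation from SNLI: filter the corpus to premise/hypothesis pairs labeled "entailment" and train a sequence-to-sequence model to map each input sentence to a sentence it entails. This is a minimal illustration, not the paper's method; in particular, the GRU encoder-decoder, the whitespace tokenizer, and all hyperparameters here are stand-ins for the authors' tree-structured architecture, whose details are not given in the abstract.

```python
# Minimal sketch of generative entailment training on SNLI (assumption:
# a flat GRU seq2seq substitutes for the paper's tree-structured model).
import torch
import torch.nn as nn
from datasets import load_dataset

# Keep only premise/hypothesis pairs labeled "entailment" (SNLI label 0).
snli = load_dataset("snli", split="train")
pairs = [(ex["premise"], ex["hypothesis"])
         for ex in snli.select(range(5000)) if ex["label"] == 0]

# Toy whitespace tokenizer with a shared vocabulary (hypothetical; the
# abstract does not specify the paper's tokenization).
PAD, BOS, EOS = 0, 1, 2
vocab = {"<pad>": PAD, "<bos>": BOS, "<eos>": EOS}

def encode(sentence):
    return [vocab.setdefault(tok, len(vocab)) for tok in sentence.lower().split()]

data = [(encode(p), encode(h)) for p, h in pairs]

class Seq2Seq(nn.Module):
    """GRU encoder-decoder: encode the input sentence, decode an entailed one."""
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=PAD)
        self.enc = nn.GRU(dim, dim, batch_first=True)
        self.dec = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, src, tgt):
        _, h = self.enc(self.emb(src))           # summarize the input sentence
        dec_out, _ = self.dec(self.emb(tgt), h)  # condition the decoder on it
        return self.out(dec_out)                 # per-step vocabulary logits

model = Seq2Seq(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

# Brief demo loop, one pair at a time; a real run would batch and pad.
for src_ids, tgt_ids in data[:1000]:
    src = torch.tensor([src_ids])
    tgt_in = torch.tensor([[BOS] + tgt_ids])     # teacher-forced decoder input
    tgt_out = torch.tensor([tgt_ids + [EOS]])    # shifted target for the loss
    logits = model(src, tgt_in)
    loss = loss_fn(logits.view(-1, logits.size(-1)), tgt_out.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

Under this framing, the entailment-prediction evaluation the abstract mentions amounts to generating hypotheses for held-out test sentences and checking them against the ground-truth SNLI annotations.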
