
Distributional Language Models and Representing Multiple Kinds of Semantic Relations

Abstract

Distributional models have been successfully used to model a wide range of semantic behaviors. One notable limitation of these models is that they lack a way to distinctly represent different kinds of semantic relations within a single semantic space. Here, we propose that neural network language models can sensibly be interpreted as representing syntagmatic (co-occurrence) relations through their input-output mappings, and paradigmatic (substitutability) relations through the similarity of their internal representations. We test this proposal on three neural network architectures (SRNs, LSTMs, and Word2Vec) using a carefully constructed artificial language corpus. The sentences in the corpus are systematically structured: each word belongs to a category, and a word's category predicts which other categories are allowed to occur in the same sentence. Using this corpus, we show that the models display interesting but understandable differences in their ability to represent these two kinds of relationships.
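The distinction the abstract draws can be illustrated with a small sketch. The following Python snippet is a toy illustration only, not the paper's corpus, models, or evaluation: the category-structured mini-corpus, the word labels (e.g., noun_a1), and all parameter values are hypothetical. It uses gensim's Word2Vec to show how paradigmatic relations can be read off the similarity of internal (embedding) representations, while syntagmatic relations can be read off the model's input-output mapping.

```python
# Toy sketch of the two relation types in a Word2Vec-style model.
#   - Paradigmatic (substitutability): cosine similarity of learned embeddings.
#   - Syntagmatic (co-occurrence): which words the model predicts from a context.
from gensim.models import Word2Vec

# Hypothetical artificial language with category structure: nouns in
# category A occur only with verbs in category X; nouns in category B
# occur only with verbs in category Y.
nouns_a, verbs_x = ["noun_a1", "noun_a2"], ["verb_x1", "verb_x2"]
nouns_b, verbs_y = ["noun_b1", "noun_b2"], ["verb_y1", "verb_y2"]
corpus = (
    [[n, v] for n in nouns_a for v in verbs_x] * 100
    + [[n, v] for n in nouns_b for v in verbs_y] * 100
)

model = Word2Vec(corpus, vector_size=16, window=2, min_count=1,
                 sg=1, seed=0, workers=1, epochs=50)

# Paradigmatic: same-category nouns never co-occur but share contexts,
# so their embeddings should be more similar than cross-category pairs.
print(model.wv.similarity("noun_a1", "noun_a2"))  # expected: high
print(model.wv.similarity("noun_a1", "noun_b1"))  # expected: lower

# Syntagmatic: the input-output mapping should favor words that legally
# co-occur with the given context word (here, category-X verbs).
print(model.predict_output_word(["noun_a1"], topn=4))
```

Under this setup, substitutable words end up close in embedding space without ever co-occurring, which is exactly why the two relation types call for different readouts from the same model.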
