Constructing Word Meaning without Latent Representations using Spreading Activation
Creative Commons 'BY' version 4.0 license
Abstract

Models of word meaning, like the Topics model (Griffiths et al., 2007) and word2vec (Mikolov et al., 2013), condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy, etc.). However, their reliance on latent representations leaves them vulnerable to interference and makes them slow learners. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations. We implement our spreading activation account of word meaning in an associative net, a one-layer, highly recurrent network of associations, called a Dynamic-Eigen-Net, which we developed to address the limitations of earlier variants of associative nets when scaling up to unstructured input domains such as natural language text. After fixing the corpus across models, we show that spreading activation using a Dynamic-Eigen-Net outperforms the Topics model and word2vec in several cases when predicting human free associations and word similarity ratings. We argue in favour of the Dynamic-Eigen-Net as a fast learner that is not subject to catastrophic interference, and present it as an example of delegating the induction of latent relationships to process assumptions instead of assumptions about representation.
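To make the idea concrete, the following is a minimal sketch of spreading activation over a one-layer associative net built from raw co-occurrence counts. The toy corpus, the symmetric association matrix, the iteration count, and the sum normalization are all illustrative assumptions; this is not the paper's Dynamic-Eigen-Net, only the general mechanism it builds on: meaning is read out at retrieval time by spreading activation from a cue, with no latent dimensions induced in advance.

```python
import numpy as np

# Illustrative assumption: a tiny corpus stands in for natural language text.
corpus = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog ate the bone",
]

# Build a symmetric word-by-word co-occurrence matrix within sentences.
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    words = s.split()
    for i, w in enumerate(words):
        for v in words[i + 1:]:
            M[idx[w], idx[v]] += 1.0
            M[idx[v], idx[w]] += 1.0

def spread(cue, steps=2):
    """Spread activation from a one-hot cue through the association matrix.

    Each step passes activation to associates of currently active words;
    normalizing by the total keeps activation bounded. The number of steps
    and the normalization scheme are assumptions for this sketch.
    """
    a = np.zeros(len(vocab))
    a[idx[cue]] = 1.0
    for _ in range(steps):
        a = M @ a
        a /= a.sum()
    return {w: float(a[idx[w]]) for w in vocab}

act = spread("dog")
```

Because the readout happens entirely at retrieval, adding a new sentence only increments counts in `M`; nothing latent needs to be re-estimated, which is the intuition behind the fast-learning, interference-resistant claims above.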
