Open Access Publications from the University of California

Core words in semantic representation


A central question in cognitive science is how semantic information is mentally represented. Two dominant theories of semantic representation are language-based distributional semantic models (which suggest that word meaning is derived from which words co-occur in language) and semantic networks based on word associations (which suggest that words are represented as a network in which words with closer meanings are more strongly linked). We investigate the issue of semantic representation through the lens of core vocabulary -- the set of words that are most central in the mental lexicon -- about which these two theories make different predictions. We report the results of an experiment testing which measure of core vocabulary most closely aligns with human behaviour in a word-guessing game, in which the aim was to identify a target word given a set of semantically related words as hints. Target and hint words, which varied across trials, were generated from different core vocabulary lists corresponding to these different theories. Results revealed that the type of hint words did not affect performance, but that performance was better for target words derived from word associations than for those derived from natural language distributional statistics. Follow-up analyses ruled out several alternative explanations. Our results suggest that the semantic information reflected in word associations may be more involved in the efficient identification of lexical meaning.
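To make the distributional idea concrete, here is a minimal, hypothetical sketch (not from the paper, and far simpler than the models it evaluates): each word is represented by counts of the words that co-occur with it within a small window, and semantic similarity is the cosine between these count vectors. The toy corpus and window size are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

# Toy corpus (an illustrative assumption, not data from the study).
corpus = "the dog chased the cat the cat chased the mouse".split()

def cooccurrence_vector(word, tokens, window=1):
    """Count the words appearing within `window` positions of `word`."""
    vec = Counter()
    for i, w in enumerate(tokens):
        if w != word:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vec[tokens[j]] += 1
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda x: sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# Words with similar co-occurrence profiles ("cat", "dog") come out
# more similar than words with dissimilar profiles ("cat", "the").
sim_cat_dog = cosine(cooccurrence_vector("cat", corpus),
                     cooccurrence_vector("dog", corpus))
sim_cat_the = cosine(cooccurrence_vector("cat", corpus),
                     cooccurrence_vector("the", corpus))
```

By contrast, an association-based semantic network would be built not from text statistics but from human responses in free-association tasks (e.g. which words people produce when cued with "cat"), with edge strength given by response frequency; the two approaches can therefore rank the centrality of words quite differently.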
