eScholarship
Open Access Publications from the University of California

Evaluating vector-space models of analogy

Abstract

Vector-space representations provide geometric tools for reasoning about the similarity of a set of objects and their relationships. Recent machine learning methods for deriving vector-space embeddings of words (e.g., word2vec) have achieved considerable success in natural language processing. These vector spaces have also been shown to exhibit a surprising capacity to capture verbal analogies, with similar results for natural images, giving new life to a classic model of analogies as parallelograms that was first proposed by cognitive scientists. We evaluate the parallelogram model of analogy as applied to modern word embeddings, providing a detailed analysis of the extent to which this approach captures human relational similarity judgments in a large benchmark dataset. We find that some semantic relationships are better captured than others. We then provide evidence for deeper limitations of the parallelogram model based on the intrinsic geometric constraints of vector spaces, paralleling classic results for first-order similarity.
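The parallelogram model evaluated here completes an analogy a:b :: c:? by taking the point b − a + c in the embedding space and returning the nearest vocabulary word. A minimal sketch of this idea, using hypothetical toy 2-D vectors in place of real word2vec embeddings:

```python
import numpy as np

# Toy 2-D "embeddings" (hypothetical values chosen for illustration;
# real word2vec vectors are learned and typically 100-300 dimensional).
vocab = {
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([1.0, 1.0]),
    "king":  np.array([3.0, 0.0]),
    "queen": np.array([3.0, 1.0]),
}

def parallelogram_analogy(a, b, c, vocab):
    """Solve a:b :: c:? by finding the word whose vector is closest
    (by cosine similarity) to the parallelogram point b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    # Exclude the query words themselves, as is standard practice.
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}

    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    return max(candidates, key=lambda w: cos(candidates[w], target))

print(parallelogram_analogy("man", "woman", "king", vocab))  # -> queen
```

With these toy vectors the relation vector woman − man = (0, 1) transfers exactly to king, so the completed parallelogram lands on queen; with real embeddings the match is only approximate, which is part of what the paper's evaluation probes.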
