We present a model of semantic memory that uses a high-dimensional semantic space constructed from a co-occurrence matrix. This matrix was formed by analyzing a 160-million-word corpus. Word vectors were obtained by extracting the rows and columns of this matrix, and these vectors were subjected to multidimensional scaling. Words were found to cluster semantically, suggesting that interword distance may be interpretable as a measure of semantic similarity. In attempting to replicate with our simulation the semantic and associative priming experiment of Shelton and Martin (1992), we found that semantic similarity plays a larger role in priming than their results would suggest. Vectors were formed for three types of related word pairs that may more orthogonally control for association and similarity, and interpair distances were computed for both related and unrelated prime-target pairs. A priming effect was found for pairs that were only semantically related, as well as for pairs that were both semantically and associatively related. No priming was found for pairs that were strictly associatively related (i.e., with no semantic overlap). This finding was replicated in a single-word priming experiment with human subjects using a lexical decision procedure. The lack of associative priming is discussed in relation to prior experiments that have found robust associative priming. We conclude that our priming results are driven by semantic overlap rather than by associativity, and that prior findings of associative priming are due, at least in part, to semantic overlap within the associated word pairs.
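The pipeline described above (co-occurrence counting, row/column vector extraction, and interword distance) can be sketched as follows. This is a minimal illustration under placeholder assumptions, not the model's actual implementation: the toy corpus, window size, and unweighted counts here are stand-ins, and no multidimensional scaling step is shown.

```python
import numpy as np

def cooccurrence_matrix(tokens, vocab, window=2):
    """Count how often each word is preceded by another within `window` positions.

    Rows index the current word; columns index the preceding word.
    (Window size and weighting are placeholder assumptions.)
    """
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), i):
            M[idx[w], idx[tokens[j]]] += 1
    return M

def word_vectors(M):
    """Form each word's vector by concatenating its row and its column of M."""
    return np.hstack([M, M.T])

# Toy corpus for illustration only.
tokens = "the dog chased the cat the dog bit the cat".split()
vocab = sorted(set(tokens))

M = cooccurrence_matrix(tokens, vocab)
V = word_vectors(M)

# Interword (Euclidean) distance as an inverse proxy for semantic similarity.
i, j = vocab.index("dog"), vocab.index("cat")
dist = np.linalg.norm(V[i] - V[j])
```

On a realistic corpus, such distances could then be compared across related and unrelated prime-target pairs, treating smaller distance as greater semantic similarity.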