Predicting semantic similarity judgments is often modeled as
a three-step process: collecting feature ratings along multiple
dimensions (e.g., size, shape, color), computing similarities
along each dimension, and combining these per-dimension
similarities into an aggregate measure (Nosofsky, 1985).
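A minimal sketch of this three-step pipeline, assuming ratings on a 1-7 scale, a difference-based per-dimension similarity, and a weighted-average combination rule (none of which are fixed by the model class; all names and values are illustrative):

import numpy as np

def predicted_similarity(ratings_a, ratings_b, weights):
    # Aggregate similarity from per-dimension feature ratings (steps 2-3).
    a, b = np.asarray(ratings_a, float), np.asarray(ratings_b, float)
    # Step 2: similarity along each dimension from the rating difference
    # (assumes a 1-7 rating scale, so the maximum difference is 6).
    per_dim = 1.0 - np.abs(a - b) / 6.0
    # Step 3: combine per-dimension similarities into one aggregate measure.
    return float(np.average(per_dim, weights=weights))

# Step 1 (collected from raters): hypothetical ratings on size, shape, color.
elephant = [7.0, 4.0, 3.0]
bear = [5.5, 4.5, 3.5]
print(predicted_similarity(elephant, bear, weights=[1.0, 1.0, 1.0]))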
However, such models
fail to account for over half of the variance in similarity
judgments for complex, real-world objects (e.g., elephant and
bear), even when those objects are described along dozens of
dimensions. To help explain this prediction gap, we propose a
twofold approach. First, we provide the first empirical
evidence of a mismatch between similarity predicted from
feature ratings and similarity reported directly by
participants along individual dimensions.
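One way such a mismatch can be quantified, sketched here for a single dimension with hypothetical values (this is not the paper's exact procedure):

import numpy as np

# For one dimension (e.g., size): similarity predicted from feature
# ratings versus similarity reported directly by participants, over the
# same object pairs (all values hypothetical).
predicted = np.array([0.9, 0.4, 0.7, 0.2, 0.6])
reported = np.array([0.6, 0.5, 0.9, 0.1, 0.3])

# Agreement between the two measures; values well below 1 indicate a mismatch.
r = np.corrcoef(predicted, reported)[0, 1]
print(f"dimension-level agreement: r = {r:.2f}")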
Second, we show that,
surprisingly, separate sub-domains within these directly
reported, dimension-specific similarities carry different
amounts of information for predicting object-level similarity
judgments. Accordingly, we demonstrate that differentially
weighting these sub-domains significantly improves prediction
of free (i.e., unconstrained) semantic similarity judgments.
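A minimal sketch of such differential weighting, assuming directly reported similarities averaged within two hypothetical sub-domains and weights fit by ordinary least squares (the grouping, names, and data are illustrative, not the paper's procedure):

import numpy as np

rng = np.random.default_rng(0)
n_pairs = 200

# Directly reported dimension-specific similarities, averaged within two
# hypothetical sub-domains (e.g., perceptual vs. conceptual dimensions).
perceptual = rng.uniform(0, 1, n_pairs)
conceptual = rng.uniform(0, 1, n_pairs)
free_judgments = rng.uniform(0, 1, n_pairs)  # placeholder target judgments

# Fit one weight per sub-domain (plus an intercept) by ordinary least squares.
X = np.column_stack([np.ones(n_pairs), perceptual, conceptual])
weights, *_ = np.linalg.lstsq(X, free_judgments, rcond=None)
print(dict(zip(["intercept", "perceptual", "conceptual"], weights)))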