eScholarship
Open Access Publications from the University of California

Similarity in object properties supports cross-situational word learning: Predictions from a dynamic neural model confirmed

Abstract

Learning names for novel objects has been shown to be impacted by the context in which they appear. Manipulations of context, therefore, provide a key pathway to explore these learning dynamics. Here we use a neural process model that instantiates the details of 'context' to generate novel, counterintuitive predictions about how similarity in object properties influences learning. Specifically, we use a dynamic field model, WOLVES, to simulate and predict learning in a cross-situational word learning task in two conditions: one where the two objects presented on each learning trial are metrically similar in a property ('NEAR') and another where the two objects are always dissimilar ('FAR'). WOLVES predicts, counterintuitively, that participants should learn better in the 'NEAR' condition (where objects are potentially confusable) than in the 'FAR' condition (where objects are distinctive). We then tested this prediction empirically, finding support for the novel prediction. This study shows the utility of process models that instantiate the details of 'context' during learning and provides support for WOLVES. We know of no other theory of cross-situational word learning that captures these novel findings.
