A neural network model of referent identification in the inter-modal preference looking task

Abstract

We present a neural network model of referent identification in a preferential looking task. The inputs are visual representations of pairs of objects, presented concurrently with unfolding sequences of phonemes identifying the target object. The model is trained to output the semantic representation of the target object and to suppress the semantic representation of the distractor object. Referent identification is achieved in the model through bottom-up processing alone. The training set uses a lexicon of 200 words and their visual and semantic referents, reported by parents as typically known by toddlers. The phonological, visual and semantic representations are derived from real corpora. The model successfully replicates experimental evidence that phonological, perceptual and categorical relationships between target and distractor modulate the temporal pattern of visual attention. In particular, the network captures early effects of phonological similarity, followed by later effects of semantic similarity, on referent identification.
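
The sketch below illustrates one way the architecture described above could be realized: a recurrent network receives the unfolding phoneme sequence together with the visual representations of the two displayed objects and is trained to produce the target's semantic vector while suppressing similarity to the distractor's. This is a minimal, hypothetical PyTorch rendering; the class name ReferentIdentificationNet, the dimension constants, the choice of a GRU layer and the loss formulation are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

PHON_DIM, VIS_DIM, SEM_DIM, HID_DIM = 25, 100, 200, 128  # illustrative sizes, not from the paper

class ReferentIdentificationNet(nn.Module):
    def __init__(self):
        super().__init__()
        # A recurrent layer integrates the unfolding phoneme sequence with the
        # (static) visual representations of the two displayed objects.
        self.rnn = nn.GRU(PHON_DIM + 2 * VIS_DIM, HID_DIM, batch_first=True)
        self.to_semantics = nn.Linear(HID_DIM, SEM_DIM)

    def forward(self, phonemes, visual_a, visual_b):
        # phonemes: (batch, time, PHON_DIM); visual_a / visual_b: (batch, VIS_DIM)
        steps = phonemes.size(1)
        scene = torch.cat([visual_a, visual_b], dim=-1)       # (batch, 2*VIS_DIM)
        scene = scene.unsqueeze(1).expand(-1, steps, -1)      # repeat the scene at every phoneme step
        hidden, _ = self.rnn(torch.cat([phonemes, scene], dim=-1))
        return self.to_semantics(hidden)                      # semantic output at each time step

def referent_loss(output, target_sem, distractor_sem, suppress_weight=0.1):
    # Pull the output toward the target's semantic vector and penalize residual
    # similarity to the distractor's vector (one possible reading of "suppress").
    target = target_sem.unsqueeze(1).expand_as(output)
    distractor = distractor_sem.unsqueeze(1).expand_as(output)
    attract = nn.functional.mse_loss(output, target)
    repel = nn.functional.cosine_similarity(output, distractor, dim=-1).clamp(min=0).mean()
    return attract + suppress_weight * repel

# Example trial batch: 4 trials, 12 phoneme time steps of random data.
net = ReferentIdentificationNet()
out = net(torch.randn(4, 12, PHON_DIM), torch.randn(4, VIS_DIM), torch.randn(4, VIS_DIM))
loss = referent_loss(out, torch.randn(4, SEM_DIM), torch.randn(4, SEM_DIM))
loss.backward()

Producing a semantic output at every time step is what allows the emergence of the target's semantics to be tracked as the phoneme sequence unfolds, which is the kind of temporal measure the abstract compares against patterns of visual attention.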
