
Listeners Optimally Integrate Acoustic and Semantic Cues Across Time During Spoken Word Recognition

This work is licensed under a Creative Commons Attribution (CC BY) 4.0 license.
Abstract

Understanding spoken words requires listeners to integrate large amounts of linguistic information over time. There has been considerable debate about how semantic context preceding or following a target word affects its recognition: preceding context is often viewed as constraining the set of possible upcoming words, while following context is viewed as a mechanism for disambiguating earlier, ambiguous input. Surprisingly, no studies have directly tested whether the timing of semantic context influences spoken word recognition. The current study manipulates the acoustic-perceptual features of a target word, a semantic cue elsewhere in the sentence that biases interpretation toward one word, and the position of that semantic cue relative to the target. We find that the two cues are integrated additively in participants' word identification responses, and that semantic context affects categorization in the same way regardless of where it appears relative to the target word. This suggests that listeners can optimally integrate acoustic-perceptual and semantic information across time.
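As a minimal sketch of what "additive integration" means here (the notation below is ours, not the paper's): suppose a listener chooses between two candidate words, $w_1$ and $w_2$, given an acoustic cue $A$ (e.g., a step along a perceptual continuum) and a semantic context cue $S$. If the two cues are conditionally independent given the word, Bayes' rule makes their contributions additive in log-odds:

\[
\log\frac{P(w_1 \mid A, S)}{P(w_2 \mid A, S)}
= \log\frac{P(A \mid w_1)}{P(A \mid w_2)}
+ \log\frac{P(w_1 \mid S)}{P(w_2 \mid S)}
\]

Under these assumptions, an optimal integrator's semantic cue shifts the identification function by a constant amount in log-odds at every continuum step, independent of when the cue arrives, which is the additive pattern the abstract reports.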
