
Integration of gaze information during online language comprehension and learning

Abstract

Face-to-face communication provides access to visual information that can support language processing. But do listeners automatically seek social information without regard to the language processing task? Here, we present two eye-tracking studies that ask whether listeners' knowledge of word-object links changes how they actively gather a social cue to reference (eye gaze) during real-time language processing. First, when processing familiar words, children and adults did not delay their gaze shifts to seek a disambiguating gaze cue. When processing novel words, however, children and adults fixated longer on a speaker who provided a gaze cue, which led to an increase in looking to the named object and less looking to the other objects in the scene. These results suggest that listeners use their knowledge of object labels when deciding how to allocate visual attention to social partners, which in turn changes the visual input to language processing mechanisms.
