
Multimodal Event Knowledge in Online Sentence Comprehension: The Influence of Visual Context on Anticipatory Eye Movements

Abstract

People predict incoming words during online sentence comprehension based on their knowledge of real-world events cued by the preceding linguistic context. We used the visual world paradigm to investigate how event knowledge activated by an agent-verb pair is integrated with perceptual information about the referent that fits the patient role. During the verb time window, participants looked significantly more at the referents that were expected given the agent-verb pair. The results are consistent with the assumption that event-based knowledge includes the perceptual properties of typical participants. Knowledge activated by the agent is compositionally integrated with knowledge cued by the verb, driving anticipatory eye movements during sentence comprehension based on expectations about not only the incoming word but also the visual features of its referent.
