Extraction of Event Roles From Visual Scenes is Rapid, Automatic, and Interacts with Higher-Level Visual Processing

Abstract

A crucial component of event recognition is understanding the roles that people and objects take: did the boy hit the girl, or did the girl hit the boy? We often make these categorizations from visual input, but even when our attention is otherwise occupied, do we automatically analyze the world in terms of event structure? In two experiments, participants made speeded gender judgments for a continuous sequence of male-female interaction scenes. Even though gender was orthogonal to event roles (whether the Agent was male or female, or vice versa), a switching cost was observed when the target character's role reversed from trial to trial, regardless of whether the actors, events, or side of the target character differed. Crucially, this effect held even when nothing in the task required attention to the relationship between actors. Our results suggest that extraction of event structure in visual scenes is a rapid and automatic process.
