
Leveraging rapid scene perception in attentional learning

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

In addition to saliency and goal-based factors, a scene’s semantic content has been shown to guide attention in visual search tasks. Here, we ask whether this rapidly available guidance signal can be leveraged to learn new attentional strategies. In a variant of the scene-preview paradigm (Castelhano & Heaven, 2010), participants searched for targets embedded in real-world scenes, with target locations linked to scene gist. We found that activating gist with scene previews significantly increased search efficiency over time in a manner consistent with formal theories of skill acquisition. We combine VGG16 and the exemplar-based random walk model (EBRW) to provide a biologically inspired account of the gist-preview advantage and its effects on learning in gist-guided attention. Preliminary model results suggest that, when preview information is useful, stimulus features may amplify the similarities and differences between exemplars.
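As a rough illustration of how deep-network scene features can feed an exemplar-based random walk, the sketch below pairs GCM-style exponential similarities with an EBRW evidence counter. The feature vectors, similarity scaling `c`, and response `threshold` are illustrative assumptions standing in for pooled VGG16 activations and fitted parameters; this is a minimal sketch, not the authors' implementation.

```python
"""
Hedged sketch: VGG16-style features driving an exemplar-based random walk (EBRW).
The feature vectors below are random stand-ins for pooled VGG16 activations;
in practice they would be extracted from scene images with a pretrained VGG16.
All names and parameter values here are illustrative assumptions.
"""
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "VGG16" features: 20 stored scene exemplars per gist category,
# 512-dimensional, with categories A and B drawn from slightly different means.
dim, n_per_cat = 512, 20
exemplars = np.vstack([
    rng.normal(0.0, 1.0, (n_per_cat, dim)),   # category A exemplars
    rng.normal(0.5, 1.0, (n_per_cat, dim)),   # category B exemplars
])
labels = np.array([0] * n_per_cat + [1] * n_per_cat)  # 0 = A, 1 = B

def ebrw_trial(probe, exemplars, labels, c=0.1, threshold=5, max_steps=10_000):
    """Run one EBRW decision: return (choice, number_of_steps)."""
    # Similarity of the probe to each stored exemplar (Shepard/GCM form).
    dists = np.linalg.norm(exemplars - probe, axis=1)
    sims = np.exp(-c * dists)
    retrieval_p = sims / sims.sum()

    counter = 0
    for step in range(1, max_steps + 1):
        # Retrieve an exemplar with probability proportional to its similarity;
        # the walk steps toward the retrieved exemplar's category.
        winner = rng.choice(len(labels), p=retrieval_p)
        counter += 1 if labels[winner] == 0 else -1
        if counter >= threshold:
            return 0, step            # respond "A"; steps serve as an RT proxy
        if counter <= -threshold:
            return 1, step            # respond "B"
    return (0 if counter > 0 else 1), max_steps

# A probe scene drawn near category A should tend to be classified as A,
# and more quickly when the feature space separates the categories well.
probe = rng.normal(0.0, 1.0, dim)
choice, steps = ebrw_trial(probe, exemplars, labels)
print(f"choice={'A' if choice == 0 else 'B'}, steps={steps}")
```

Under these assumptions, sharper feature-space separation between exemplars (e.g., a larger similarity-scaling parameter or more distinct category means) speeds the walk to threshold, which is one way preview information could amplify exemplar similarities and differences.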
