
Neural evidence of visual-spatial influence on aural-verbal processes

Abstract

Everyday tasks demand attentional resources to perceive, process, and respond to important information. Attempting to complete multiple tasks simultaneously, that is, multitasking, necessarily requires more resources than completing any single task alone. Allocating common resources among two or more difficult tasks leads to competition and results in performance deficits in one or more of the to-be-completed tasks. Multiple resource theory suggests separate pools for perceiving (aural, visual, tactile), processing (verbal, spatial), and responding (vocal, manual), but a common overarching resource pool still exists and is heavily taxed by the management of multiple ongoing tasks. We use the combination of neural activity and performance to estimate the degree to which the demands of a visual-spatial-manual (VSM) task impede the performance of an auditory-verbal-vocal (AVV) task, where each taxes independent pools of attentional resources. We found that AVV performance decreased when paired with a more difficult VSM task. Using components from group-level event-related potentials (ERPs), we draw conclusions about how and why cross-modal task performance changes, and diagnose resource bottlenecks and limitations. Specifically, we find that auditory evoked potentials, the P300, and the Reorienting Negativity serve as fruitful indicators not only of high or low cross-modal load but also of (in)correct trial performance. Further, we discuss how these indicators provide insight into the underlying mechanisms driving misses, and whether cross-modal bottlenecks occur at the perceptual, cognitive, or response stage.
