Visual working memory, a complex cognitive process essential for goal-directed behavior, allows the internal maintenance and manipulation of detailed visual information that is no longer available in the environment. The neural processes and representations that support this essential ability remain the focus of much research and debate. In this dissertation, I present three experiments that tested key predictions of a sensory recruitment model of visual working memory, which proposes that the same regions responsible for primary sensory processing are recruited to maintain precise sensory details over short delays. In all three experiments, functional magnetic resonance imaging (fMRI) data were collected while participants performed cognitive tasks requiring visual working memory. The primary fMRI analyses described here use two multivariate approaches that model the information content of distributed patterns of brain activity: inverted encoding models and multivoxel pattern analysis.
The first chapter of this dissertation uses an inverted encoding model to reconstruct simple orientation information held in working memory and examines the effect of subsequent visual input on these memory representations. Here, I show that visual working memory representations are flexible and can dynamically adjust to meet task demands. First, I find that the early visual areas maintain precise orientation information over a delay, but that these representations are susceptible to bias from visual interference. Further, I find that the intraparietal sulcus redundantly represents orientation information in anticipation of possible distraction and continues to do so if visual interference renders early visual cortical representations unreliable.
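The inverted-encoding-model logic described above can be illustrated with a minimal sketch on synthetic data: estimate channel weights from "perception" trials, then invert the model to reconstruct a channel response profile from held-out "delay-period" activity. This is an illustrative simplification under assumed choices (a half-rectified cosine basis, least-squares estimation, made-up trial counts and noise levels), not the dissertation's actual analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_voxels = 8, 60
centers = np.arange(0.0, 180.0, 180.0 / n_channels)   # channel centers (deg)

def channel_responses(oris):
    """Idealized orientation tuning: half-rectified cosine, 180-deg periodic."""
    diff = np.deg2rad(oris[:, None] - centers[None, :])
    return np.clip(np.cos(2.0 * diff), 0.0, None) ** 6

# Simulate training data (the "perception" phase): voxel activity is a
# weighted sum of channel responses plus noise.
train_oris = rng.uniform(0, 180, size=200)
C_train = channel_responses(train_oris)               # trials x channels
W = rng.random((n_voxels, n_channels))                # true voxel weights
B_train = C_train @ W.T + 0.05 * rng.standard_normal((200, n_voxels))

# Step 1: estimate the encoding model (weights mapping channels -> voxels).
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0].T   # voxels x channels

# Step 2: invert the model on a held-out trial (e.g. delay-period activity)
# to reconstruct the channel response profile of the remembered orientation.
test_ori = np.array([centers[3]])                     # remembered orientation
B_test = channel_responses(test_ori) @ W.T + 0.05 * rng.standard_normal((1, n_voxels))
C_hat = np.linalg.lstsq(W_hat, B_test.T, rcond=None)[0].T    # 1 x channels

peak_channel = int(np.argmax(C_hat))                  # should sit near centers[3]
```

In this toy setting the reconstructed profile peaks at the channel tuned to the remembered orientation; in the actual experiments, the same inversion is applied to delay-period fMRI patterns to ask whether, and how precisely, early visual and parietal areas retain the memorandum.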
In the second chapter, I present strong evidence in favor of a sensory recruitment model of visual working memory for complex images like human faces. Here, I use an encoding model to characterize distributed response patterns in face-selective regions of inferior temporal cortex during face perception. I show that by inverting this perceptual encoding model and applying it to data from a memory delay, I can successfully reconstruct face information held in working memory. Importantly, this perception-to-memory generalization implies a common representational structure between perceptual and mnemonic codes.
The final chapter examines the role of lateral prefrontal control processes in shaping successful visual working memory performance, using fMRI data collected after disruption of inferior frontal gyrus activity with transcranial magnetic stimulation. Here, I provide causal evidence that the lateral prefrontal cortex modulates the selectivity of working memory representations in extrastriate visual cortex. In addition, I find that the inferior frontal gyrus helps determine the task-relevance of visual input and communicates that information to a network of regions supporting further processing during visual working memory.
Together, these experiments contribute to our understanding of how the human brain represents both basic visual features and complex images like human faces and visual scenes. I find evidence that sensory regions are recruited to maintain precise visual information in memory, and that top-down prefrontal and parietal control processes shape the selectivity and distractor resistance of these mnemonic representations.