Semantic Representation in the Human Brain
- Author(s): Huth, Alexander; et al.
- Advisor(s): Gallant, Jack L.
The goal of the human sensory system is to build a useful internal representation of the world. In vision, this means that the brain categorizes and identifies all the objects and actions that are being observed. In language, it means that the brain understands the meaning of each word that is being perceived, and integrates across words to understand the meaning of a narrative. Both of these processes can be seen as extracting the meaning, or semantic content, from a dense stream of sensory information. We know very little about how the brain accomplishes these feats. Indeed, entire fields of research (computer vision and natural language processing) are devoted to reproducing on a computer feats of understanding that the human brain accomplishes with ease. We may be able to gain insight into these processes by studying how the semantic information extracted from visual and linguistic stimuli is represented in the brain.
This dissertation describes three functional magnetic resonance imaging (fMRI) experiments that have helped to reveal how visual and linguistic semantic information is represented across the human cerebral cortex. These experiments relied on a relatively new fMRI analysis methodology known as voxel-wise modeling (VM). Although this methodology was originally developed to model how the brain represents the structure of visual information, it was adapted here to model the representation of the extremely complex semantic information present in natural movies and natural narrative stories.
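The core idea of voxel-wise modeling is to fit a separate regularized linear regression for each voxel, mapping stimulus features (e.g., the semantic content of each movie scene or story word) to that voxel's response. The sketch below illustrates this with closed-form ridge regression on simulated data; all dimensions, the noise level, and the `fit_ridge` helper are illustrative assumptions, not the dissertation's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: T time points, F stimulus features, V voxels.
T, F, V = 200, 10, 50

# Simulated stimulus features and voxel responses generated from a known
# linear map plus noise (stand-ins for real stimuli and fMRI data).
X = rng.standard_normal((T, F))
true_W = rng.standard_normal((F, V))
Y = X @ true_W + 0.1 * rng.standard_normal((T, V))

def fit_ridge(X, Y, lam=1.0):
    """Closed-form ridge regression, one weight vector per voxel:
    W = (X^T X + lam * I)^{-1} X^T Y
    """
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

W = fit_ridge(X, Y, lam=1.0)

# Evaluate the encoding model by correlating predicted and actual
# responses separately for each voxel.
Y_hat = X @ W
r = np.array([np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(V)])
print(round(r.mean(), 3))
```

In practice the regularization strength would be chosen by cross-validation and the model evaluated on held-out stimuli, but the per-voxel structure of the fit is the same.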
The first experiment (Chapter 2) showed that information about the object and action categories present in natural movies is represented in a low-dimensional semantic space that is shared across subjects. Projecting this semantic space onto the cortex revealed that semantic information is represented in broad cortical gradients that cover a surprisingly large fraction of the cortical surface. The second experiment (Chapter 3) showed that information about the semantic content of narrative spoken stories is also represented in a low-dimensional space that is shared across subjects. To model the complex cortical maps for this semantic space, a new technique called PrAGMATiC (Probabilistic And Generative Model of Areas Tiling the Cortex) was developed. This technique revealed that the cortical semantic maps can be explained by about 230 functional areas covering much of the prefrontal cortex, temporoparietal junction, precuneus, and temporal cortex. Finally, the third experiment (Chapter 4) showed that a novel hierarchical logistic regression (HLR) model could accurately decode the categories of objects and actions present in natural movies from fMRI responses.
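The HLR model in Chapter 4 exploits the hierarchical structure of object and action categories; that hierarchy is omitted here. As a simplified illustration of the decoding direction (predicting category labels from voxel responses, the reverse of the encoding models above), the following sketch fits one flat logistic regression per category on simulated data. The sizes, learning rate, and `fit_logistic` helper are assumptions for illustration, not the dissertation's actual decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: N time points, V voxels, C object/action categories.
N, V, C = 300, 40, 5

# Simulated voxel responses and binary category labels generated from a
# known linear rule (stand-ins for real fMRI data and movie annotations).
X = rng.standard_normal((N, V))
true_W = rng.standard_normal((V, C))
Y = (X @ true_W > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, Y, lr=0.1, steps=500, lam=1e-3):
    """Gradient descent on L2-regularized logistic loss, fitting an
    independent (non-hierarchical) decoder for each category."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ W) - Y) / X.shape[0] + lam * W
        W -= lr * grad
    return W

W = fit_logistic(X, Y)

# Fraction of category labels decoded correctly on the training data.
acc = ((sigmoid(X @ W) > 0.5) == Y).mean()
print(round(acc, 3))
```

A hierarchical decoder would additionally constrain predictions so that, for example, a scene decoded as containing a "dog" is also decoded as containing an "animal"; the flat version above treats each category independently.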