Language Representation in Human Cerebral Cortex
- Gong, Xue
- Advisor(s): Theunissen, Frédéric E.; Gallant, Jack L.
Abstract
To comprehend language, the human brain transforms sound pressure waveforms (speech) and visual patterns (reading) into meaningful words through a series of dynamic intermediate representations. This capability is particularly evident in bilinguals, who can extract the same meaning from different spoken and written languages despite differences in the acoustic and visual inputs. However, our understanding of these complex brain functions remains limited. This dissertation presents three functional magnetic resonance imaging (fMRI) experiments that explore how semantic information is extracted from acoustic and visual patterns in the cerebral cortex of monolingual English speakers and Mandarin-English bilingual speakers. Using naturalistic stimuli and voxelwise encoding models, these experiments investigate the cerebral cortical representation of the visual, acoustic, articulatory, phonemic, orthographic, semantic, and syntactic components of language. The first experiment (Chapter 2) reveals that the human brain segments continuous speech into diphones to facilitate semantic comprehension, and identifies two putative brain regions where phonemic processing transitions to semantic representation. The second experiment (Chapter 3) maps the cerebral cortical representation of the visual, orthographic, and semantic aspects of reading, demonstrating that language processing involves simultaneous interaction across multiple levels of information rather than a series of independent steps. The third experiment (Chapter 4) compares these processes between English and Mandarin, showing that while visual and semantic representations are similar across the two languages, orthographic processing differs substantially.