eScholarship
Open Access Publications from the University of California

UCSF

UC San Francisco Electronic Theses and Dissertations

Cortical dynamics of speech motor sequencing and production

Abstract

Speech is one of the most efficient and effortless ways to communicate. Producing speech requires planning speech targets, sequencing speech-motor movements, and coordinating a dynamic system of articulators to shape breath in real time, generating the sounds we perceive and interpret as needs, ideas, and emotions. Loss of this ability, through neurodegenerative disease or paralysis, is devastating and reduces self-reported quality of life. For many with this condition, the cortical signals that control the articulators persist; however, these signals cannot reach the vocal tract because the descending pathways are injured or diseased, resulting in vocal tract paralysis. Recent advances in neural recording hardware, in our understanding of how the brain controls speech production (specifically in the ventral sensorimotor cortex), and in machine learning have enabled the development of speech brain-computer interfaces (BCIs). A direct speech BCI would translate neural signals into intended speech, restoring these individuals' ability to communicate. This body of work first demonstrates a proof of concept: a direct speech BCI with a 50-word vocabulary, developed using high-density neural recording hardware, called electrocorticography, in a participant who cannot speak due to severe paralysis. We then built on this proof of concept by developing a spelling-based speech BCI that could be controlled by silently attempted speech. The methods used in these studies were grounded in our understanding of how neural signals control articulatory movements. However, much less is known about how the brain controls the upstream processes of planning and sequencing these movements. Motivated by the success of translating neuroscientific findings into BCI development, we next sought to understand how speech is sequenced in the brain.
Using a task in which healthy speakers produced syllable sequences of varying complexity, we found both production-specific neural activity and widespread sustained activity associated with planning syllable sequences. This network, comprising areas classically implicated in speech planning, such as Broca's area, as well as less-studied regions such as the middle precentral gyrus (mPrCG), was modulated by sequence complexity. However, only the mPrCG showed robust sustained activity and sequence-complexity encoding, and its activity correlated with participants' reaction times, suggesting that this area's role in speech planning is specific to speech-motor sequencing. We confirmed this with direct cortical stimulation, which induced speech errors only during complex sequences, in the absence of direct motor or perceptual effects. This work establishes the mPrCG as a critical node for speech-motor sequencing, redefining traditional notions of how the brain sequences and produces speech. Together, these studies demonstrate the potential of speech BCIs for restoring speech to paralyzed individuals and put forth new possibilities for neurobiologically informed speech-decoding algorithms.
