eScholarship
Open Access Publications from the University of California

UC San Francisco Electronic Theses and Dissertations

Representation of directly measured speech movements in human sensorimotor cortex

Abstract

During speech production, we make vocal tract movements with remarkable precision and speed. Starting with the earliest cortical stimulation studies, we have learned much about which brain regions are involved in speech motor control. However, our understanding of how activity in these regions gives rise to the movements made is limited, in part due to the challenge of simultaneously acquiring high-resolution neural recordings and detailed vocal tract measurements. A complete neurobiological understanding of speech motor control requires determining the relationship between simultaneously recorded neural activity and the kinematics of the speech articulators (i.e., lips, jaw, and tongue). Recent advances in human electrophysiological recording allow us to observe neural activity in these regions with unprecedented resolution, but without concurrently measuring the speech articulators it is difficult to interpret this activity. To overcome this challenge, we combined ultrasound and video monitoring of the supralaryngeal articulators (lips, jaw, and tongue) with electrocorticographic (ECoG) recordings from the cortical surface to investigate how neural activity relates to measured articulator movement kinematics (position, speed, velocity, and acceleration) during the production of English vowels.

In this document, we first review the functional organization of primary speech motor cortex, also called ventral sensorimotor cortex (vSMC). Next, we describe and validate methods for a noninvasive, multimodal imaging system that monitors vocal tract kinematics and is compatible with bedside human neurophysiological recordings. Last, we use these methods to examine the relationship between activity in vSMC and the kinematics of speech articulator movements. These findings provide novel insight into how articulatory kinematic parameters are encoded in vSMC during speech production.
