eScholarship
Open Access Publications from the University of California

Neural signals and control of the larynx

  • Author(s): Dichter, Benjamin K
  • Advisor(s): Chang, Edward F
Abstract

The ability of the human brain to represent senses reliably and command a motor response is central to our ability to respond to the world around us. Here, I aim to understand these processes through simulation and experimental analysis. First, I developed a recurrent neural network as a model of the brain receiving approximate sensory input. By learning to capture the distribution of sensory representations, the network learns the dynamics of the underlying stimulus and integrates information near-optimally over time, using recent estimates of position and velocity to inform its current estimate of the object's state.
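As a toy illustration of what "near-optimal integration over time" means here (this is a sketch of the benchmark idea, not the dissertation's actual model), a constant-velocity Kalman filter combines noisy position observations with recent position and velocity estimates; a trained recurrent network can approximate this behavior:

```python
import numpy as np

def kalman_track(observations, obs_noise=1.0, process_noise=0.01):
    """Track a constant-velocity object from noisy position readings."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: pos += vel
    H = np.array([[1.0, 0.0]])               # only position is observed
    Q = process_noise * np.eye(2)            # process noise covariance
    R = np.array([[obs_noise]])              # observation noise covariance
    x = np.zeros(2)                          # state estimate: [pos, vel]
    P = np.eye(2)                            # estimate covariance
    estimates = []
    for z in observations:
        # Predict forward one time step.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new noisy observation.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

rng = np.random.default_rng(0)
true_pos = np.arange(50) * 0.5              # object moving at 0.5 per step
noisy_obs = true_pos + rng.normal(0, 1.0, 50)
est = kalman_track(noisy_obs)
# The filtered position estimate tracks the true trajectory more
# closely than the raw noisy observations do.
print(np.mean((est[:, 0] - true_pos) ** 2) < np.mean((noisy_obs - true_pos) ** 2))
```

The filter's use of a running velocity estimate to refine each new position estimate is the same computation the network is described as learning from the data.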

Next, I analyzed the cortical representation of auditory speech. Syllables were played to human subjects while voltage fluctuations were recorded directly from their brains using electrocorticography. I found that the variability of neural activity in the superior temporal gyrus, an auditory cortical region, was “quenched” upon stimulus presentation. This decrease in variability coincided with stimulus representation and enabled the brain to represent the stimulus more accurately.
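The quenching effect can be illustrated with simulated data (all numbers below are invented for illustration, not taken from the recordings): across-trial variance is computed at each time point, and it drops at stimulus onset when trials lock onto a shared evoked response.

```python
import numpy as np

# Simulate trials of neural activity around a stimulus onset.
rng = np.random.default_rng(1)
n_trials, n_time, onset = 200, 100, 50

# Pre-stimulus: large trial-to-trial fluctuations around baseline.
pre = rng.normal(0.0, 2.0, (n_trials, onset))

# Post-stimulus: a shared evoked response with smaller variability,
# i.e. the "quenched" regime.
evoked = np.sin(np.linspace(0, np.pi, n_time - onset))
post = evoked + rng.normal(0.0, 0.5, (n_trials, n_time - onset))

trials = np.hstack([pre, post])

# Across-trial variance at each time point.
var_t = trials.var(axis=0)
print(var_t[:onset].mean())   # ~4: pre-stimulus variance
print(var_t[onset:].mean())   # ~0.25: quenched post-stimulus variance
```

A lower across-trial variance after onset means the responses are more reproducible, which is what allows the stimulus to be represented more accurately.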

Then, I examined the cortical control of laryngeal function in humans using electrocorticography during speech production. I found that the dorsal laryngeal motor cortex controls modulations of vocal pitch. Activity in that region is correlated with pitch in both speech and song. The representation of pitch in this region is separable from voicing, showing that multiple dimensions of control are represented in the cortex. Through cortical stimulation, I show that activity in this area causes proportional laryngeal muscle activation. I discuss how these findings may further our understanding of the evolution of speech in humans. Finally, I discuss how these signals can be used to decode prosodic patterns directly from neural activity for use in a speech prosthetic.
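The simplest form of the prosthetic decoding step can be sketched as a linear readout from multichannel activity to a pitch contour (a hypothetical illustration with simulated data, not the dissertation's decoder):

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_channels = 500, 16

# Simulate cortical activity and a pitch trace that depends on it
# linearly, plus a small amount of noise.
true_weights = rng.normal(0, 1, n_channels)
activity = rng.normal(0, 1, (n_samples, n_channels))
pitch = activity @ true_weights + rng.normal(0, 0.1, n_samples)

# Fit the linear readout by least squares on a training split,
# then decode pitch on held-out samples.
train, test = slice(0, 400), slice(400, 500)
w, *_ = np.linalg.lstsq(activity[train], pitch[train], rcond=None)
pred = activity[test] @ w

# Correlation between decoded and actual pitch on the test split.
corr = np.corrcoef(pred, pitch[test])[0, 1]
print(corr)
```

In practice the mapping from electrocorticographic activity to prosody is noisier and may be nonlinear, but a held-out correlation of this kind is the standard way to quantify how well pitch is decoded.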
