eScholarship
Open Access Publications from the University of California

Auditory Cortex Activity Modulation in Response to Sensory Feedback during Audiomotor Map Learning

  • Author(s): Goel, Mahima
  • Advisor(s): Nagarajan, Srikantan
Abstract

Sensory feedback plays an important role in maintaining steady, fluent speech production. To date, most speech research has focused on auditory feedback, while somatosensory feedback has received comparatively little attention. The current study therefore explores the effect of vocal tract somatosensory feedback on auditory cortex activity during audiomotor map learning. Given extensive evidence for a phenomenon known as motor-induced suppression (MIS), the study hypothesizes that cortical activity will be reduced in subjects after learning is established. Using an MEG-compatible touchscreen speech synthesizer setup, subjects heard a target vowel sound and were asked to touch the location on the touchscreen that matched the sound they heard. On each trial, subjects received feedback in the form of the sound corresponding to the screen location they touched, so that they gradually learned to map each vowel sound to a target location on the screen. This setup provided a paradigm for testing the effect of audiomotor map learning. Data from each subject were then analyzed before and after learning using the NUTMEG software and the Champagne source localization algorithm, and differences between auditory-only and combined auditory-somatomotor feedback, as well as between the left and right auditory cortices, were noted. Across all four conditions (left auditory cortex with auditory feedback, right auditory cortex with auditory feedback, left auditory cortex with auditory and somatomotor feedback, right auditory cortex with auditory and somatomotor feedback), almost all subjects showed a statistically significant decrease in cortical activity after audiomotor map learning was established. This study lays the groundwork for future work in which a variety of patient populations, as well as different forms of feedback, can be studied using a touchscreen speech synthesizing platform.