Computational Modeling of Speech Production and Aphasia
- Author(s): Walker, Grant
- Advisor(s): Hickok, Gregory
We investigated a new computational model of speech production, the Semantic-Lexical-Auditory-Motor Model (SLAM), which was designed to test a critical assumption of the Hierarchical State Feedback Control theory (Hickok, 2012): specifically, that speech production relies on the coordination of dual representations in auditory and motor cortices, with auditory representations serving as targets during speech planning. The computational details are based on the interactive, two-step lexical retrieval model of Foygel and Dell (2000); our novel architecture allows us to predict the consequences of different patterns of damage among the proposed speech representations. The additional model structure is expected to better explain conduction aphasia in particular.
We analyzed archived picture-naming data from 255 people with aphasia in Philadelphia, PA, along with new data from another 95 people with aphasia in Columbia, SC. We found that the SLAM model made adequate predictions overall and improved the fit to data from patients with conduction aphasia specifically, in the expected manner. We also analyzed neuroanatomical data in the form of lesion masks standardized to a template for 83 of the participants from the SC cohort. Although we were unable to replicate a prior study localizing brain regions where damage leads to a significant increase in particular error types, a behavioral comparison of the two cohorts revealed the sampling variability that exists within the aphasia population.
Next, we developed a Bayesian approach to estimating the parameters of the lexical network, providing a more comprehensive assessment of the model's quality. Additionally, we simulated word and nonword repetition tasks with our network, generating new predictions for a subset of 28 people with aphasia and unimpaired hearing.
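The interactive two-step retrieval idea underlying the model can be illustrated with a toy spreading-activation sketch. Everything here is an assumption for illustration (layer sizes, random weights, decay, noise, and step counts are not the fitted parameters), and the single phonological layer stands in for the coordinated auditory and motor layers that SLAM adds:

```python
import numpy as np

# Toy sketch of interactive two-step lexical retrieval, in the spirit of
# Foygel and Dell (2000). All sizes, weights, and settings are illustrative
# assumptions, not values from the actual model.
rng = np.random.default_rng(0)

n_sem, n_lex, n_phon = 10, 5, 8
W_sl = rng.uniform(0, 1, (n_sem, n_lex)) * 0.1   # semantic-lexical weights
W_lp = rng.uniform(0, 1, (n_lex, n_phon)) * 0.1  # lexical-phonological weights

def step(sem, lex, phon, decay=0.5, noise=0.01):
    # Interactive (bidirectional) spreading with decay and Gaussian noise.
    lex_new = (1 - decay) * lex + sem @ W_sl + phon @ W_lp.T
    sem_new = (1 - decay) * sem + lex @ W_sl.T
    phon_new = (1 - decay) * phon + lex @ W_lp
    jitter = lambda a: np.maximum(a + rng.normal(0, noise, a.shape), 0)
    return jitter(sem_new), jitter(lex_new), jitter(phon_new)

# Step 1: jolt the target's semantic features, let activation settle,
# then select the most active word node.
sem = np.zeros(n_sem); sem[:3] = 1.0  # toy semantic features of the target
lex = np.zeros(n_lex); phon = np.zeros(n_phon)
for _ in range(8):
    sem, lex, phon = step(sem, lex, phon)
word = int(np.argmax(lex))

# Step 2: jolt the selected word, settle again, and select segments.
lex = np.zeros(n_lex); lex[word] = 1.0
for _ in range(8):
    sem, lex, phon = step(sem, lex, phon)
phonemes = np.argsort(phon)[-3:]  # toy: the three most active segments
```

Because noise can redirect the selections at either step, repeated runs of a network like this yield a distribution over response types (correct, semantic error, phonological error, and so on), which is what gets compared to patients' naming data.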
We found that the 4-parameter SLAM model could simultaneously fit the frequencies of 6 naming and 3 nonword repetition response types while also correctly predicting a novel set of 6 word repetition response types. The data from patients with conduction aphasia were best explained by strong lexical-auditory and weak auditory-motor connections. Our results demonstrate that the assumption of coordinated speech representations in auditory and motor cortices can lead to viable predictions of speech production behavior in aphasia.
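The Bayesian estimation of connection weights can be sketched, in simplified form, as grid approximation with a multinomial likelihood over response-type counts. The forward model below (one weight mapped to three response-type probabilities) is a hypothetical stand-in; in the actual work the predicted probabilities come from simulating the lexical network:

```python
import numpy as np

def predict_probs(w):
    # Hypothetical forward model: a stronger connection weight yields
    # more correct responses; the error split is an arbitrary assumption.
    return np.array([w, (1 - w) * 0.7, (1 - w) * 0.3])

counts = np.array([60, 25, 15])        # toy observed response-type counts
grid = np.linspace(0.01, 0.99, 99)     # candidate weight values

# Multinomial log-likelihood (up to a constant) at each grid point,
# combined with a uniform prior over the grid.
log_lik = np.array([counts @ np.log(predict_probs(w)) for w in grid])
log_post = log_lik + np.log(1.0 / grid.size)
post = np.exp(log_post - log_post.max())
post /= post.sum()

w_map = grid[np.argmax(post)]          # posterior mode of the weight
```

A grid posterior like this supports the "more comprehensive assessment" the abstract mentions: beyond a single best-fitting point, it quantifies how sharply the data constrain each parameter.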