Audiovisual Spoken Word Processing in Typical-Hearing and Cochlear Implant-Using Children: An ERP Investigation

Abstract

The process of spoken word recognition is influenced by both bottom-up sensory information and top-down cognitive information. These cues are used to process the phonological and semantic representations of speech. Several studies have used EEG/ERPs to study the neural mechanisms of children's spoken word recognition, but less is known about the role of visual speech information (facial and lip cues) in this process. It is also unclear whether populations with different early sensory experiences (e.g., deaf children who receive cochlear implants, CIs) show the same pattern of neural responses during audiovisual (AV) spoken word recognition. Here we investigate ERP components corresponding to the sensory, phonological, and semantic neural responses of typical-hearing (TH) and CI-using school-age children during a picture-audiovisual word matching task. Children (TH n = 22; CI n = 13; ages 8–13 years) were asked to match picture primes with AV video targets of speakers naming the pictures. ERPs were time-locked to the onset of the target's meaningful visual and auditory speech information. The results suggest that while CI and TH children may not differ in their sensory (Visual P1, Auditory N1) or semantic (N400, Late N400) responses, there may be differences in the intermediary components associated with either phonological or strategic processing. Specifically, we find an N280 response in the CI group and a P300 component in the TH group. Subjects' ERPs are correlated with their age, hearing experience, task performance, and language measures. We interpret these findings in light of the unique strategies these two groups of children may employ based on their use of different speech cues or task-level predictions. These findings better inform our understanding of the neural bases of AV speech processing in children, specifically where differences may emerge between groups of children with differential sensory experiences; the results have implications for improving spoken language access for children with cochlear implants.
