Seeing speech: Cerebral mechanisms of Cued Speech perception

Abstract

Most alphabets are based on the visual coding of phonemes and syllables, and similar visual codes have been developed to convey the sounds of speech to deaf people. Notably, Cued Speech (CS) specifies syllables through a combination of lip configuration, hand location, and hand shape. The use of this communication system has been shown to improve general language skills in a deaf community characterized by low literacy, yet the mechanisms of CS perception remain largely unknown. In an fMRI study involving three groups of participants (deaf and hearing people proficient in CS, and hearing people naïve to CS), we identify the brain areas that process and, more specifically, encode the various components of CS. Particular attention is given to the role of expertise, and to the links between CS and reading, two coexisting visual codes for language that both compete with and support each other.
