Cross-modal perceptual learning in learning a tonal language

Abstract

Limited evidence shows that visual input can facilitate learning the novel sound-to-meaning mappings that are crucial to learning a second language. However, the mechanisms by which visual information influences auditory learning remain unclear. Here, we investigate to what extent visual input can lead to effective learning in the auditory domain. We trained speakers of non-tonal languages on Mandarin tones in four conditions: Auditory Only (AO), where only auditory tones were given as input; Animated Contour (AC), where moving visual pitch contours indicating the dynamic changes of tones were given in addition to auditory tones; Static Contour (SC), where static visual pitch contours were given in addition to auditory tones; and Incongruent Contour (IC), where mismatched pitch contours were given in addition to auditory tones. The results show an advantage of AC and SC over AO in learning tonal categories and show that IC inhibits learning, suggesting that extracting ‘compatible’ properties across modalities benefits learning most.
