eScholarship
Open Access Publications from the University of California

Can audio-visual integration, adaptive learning, and explicit feedback improve the perception of noisy speech?

Abstract

The perception of degraded speech input is essential in everyday life and is a major challenge in a variety of clinical settings, including for cochlear implant users. We investigated English speakers' perception of noisy speech via an audio-visual lexical decision paradigm that modulated cross-modal integration, adaptive modulation of task difficulty, and explicit feedback on response accuracy. We then tested whether proficiency with this task transferred to the perception of noisy audio stimuli in a post-test. Although we observed a processing advantage for bimodal stimuli during training, particularly in the adaptive training condition, we did not observe any benefit from these conditions in the post-test, nor a benefit associated with providing explicit feedback. These results are discussed in relation to other studies of audio-visual integration and learning to perceive noisy speech, which may have observed different results due to more extensive training and different baseline proficiency levels.
