eScholarship
Open Access Publications from the University of California

Speech Processing does not Involve Acoustic Maintenance

Abstract

What happens to the acoustic signal after it enters the mind of a listener during real-time speech processing? Since processing involves extracting linguistic evidence from multiple, temporally distinct sources of information, successful communication relies on a listener's ability to combine these potentially disparate signals. Previous work has shown that listeners are able to maintain, and rationally update, some type of intermediate representation over time. However, exactly what type of information is being maintained, be it acoustic-phonetic detail or rather a probability distribution over phonemes, has been underspecified. In this paper we present a perception experiment aimed at identifying the internal contents of intermediate representations in speech processing. Using an accent-adaptation paradigm, we find that listeners adapt to a modulated acoustic signal when the corresponding orthography is provided before the audio, but not when the audio precedes the orthography. This supports the position that intermediate representations are uncertainty distributions over discrete units (e.g., phonemes) and that, by default, speech processing involves no maintenance of the acoustic-phonetic signal.
