eScholarship
Open Access Publications from the University of California

Spectrotemporal cues and attention modulate neural networks for speech and music

Creative Commons 'BY' version 4.0 license
Abstract

Speech and music are fundamental human communication modes. The extent to which they rely on specific brain networks or exploit general auditory mechanisms based on their spectrotemporal acoustic structure is debated. We aimed to define connectivity patterns modulated by attention to auditory content and by spectrotemporal information, using fMRI. Participants tried to recognise sung speech stimuli that were gradually deprived of spectral or temporal information. Although auditory cortices appeared to specialise in temporal (left) and spectral (right) encoding, the modularity of the bilateral connectivity network was largely unaffected by spectrotemporal degradations or attention. However, while participants' recognition decreased when the necessary acoustic information (spectral for attention to melodies, temporal for attention to sentences) was degraded, the efficiency of information flow in the network increased, and different subnetworks emerged. This suggests that the loss of crucial spectral (melody) or temporal (speech) information is compensated within the network by recruiting additional and differential neural resources.
