
Disrupting naturalistic temporal structure impacts musical memory, prediction, and segmentation

Creative Commons Attribution 4.0 (CC BY) license
Abstract

The brain represents acoustic structure in speech and music hierarchically, with lower-order regions processing shorter-timescale information (e.g., syllables, notes) and higher-order regions processing longer timescales (e.g., sentences, musical phrases). However, it is not known to what extent this neural hierarchy reflects differences in cognitive processing of these distinct timescales. In a behavioral experiment, musician and non-musician participants heard naturalistic piano music scrambled at four temporal levels: 1-measure, 2-measure, 8-measure, and fully intact. We found that participants more accurately remembered and predicted musical information within a more intact musical context than a less intact one (p < .001). Because the stimuli had a consistent timbre and tempo, this effect indicates that listeners were sensitive to tonal and rhythmic structure, which likely supported their processing of the more intact, cohesive stimuli. Highly trained musicians outperformed non-musicians only in the memory task (p < .05), which suggests that prediction may rely more on implicit knowledge.
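To make the scrambling manipulation concrete, the sketch below (Python/NumPy) illustrates one way such stimuli could be constructed: an audio signal is cut into measures at a fixed tempo, the measures are grouped into chunks of 1, 2, or 8 measures, and the chunk order is shuffled. The function name, its parameters, and the audio-array representation are illustrative assumptions, not the authors' actual stimulus-generation procedure (the original stimuli were naturalistic piano performances).

import numpy as np

def scramble_by_measures(audio, sr, tempo_bpm, beats_per_measure=4,
                         chunk_measures=1, seed=0):
    # Hypothetical sketch: scramble a mono audio array at the measure level,
    # assuming a fixed tempo (tempo_bpm) and a fixed meter (beats_per_measure).
    if chunk_measures is None:
        return audio.copy()  # "intact" condition: no reordering
    # Duration of one measure in samples.
    samples_per_measure = int(round(sr * 60.0 / tempo_bpm * beats_per_measure))
    n_measures = len(audio) // samples_per_measure
    # Cut the piece into whole measures, then group them into chunks of
    # 1, 2, or 8 measures depending on the scrambling level.
    measures = [audio[i * samples_per_measure:(i + 1) * samples_per_measure]
                for i in range(n_measures)]
    chunks = [np.concatenate(measures[i:i + chunk_measures])
              for i in range(0, n_measures, chunk_measures)]
    # Shuffle chunk order: local structure within each chunk is preserved,
    # while longer-timescale structure across chunks is disrupted.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(chunks))
    return np.concatenate([chunks[i] for i in order])

For example, scramble_by_measures(y, sr, tempo_bpm=100, chunk_measures=8) would approximate the 8-measure condition (the tempo value here is an arbitrary placeholder), while chunk_measures=None leaves the excerpt intact.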
