Harmonics co-occurrences bootstrap pitch and tonality perception in music: Evidence from a statistical unsupervised learning model

Abstract

The ability to extract meaningful relationships from sequences is crucial to many domains of perception and cognition, such as speech and music. This paper explores how leading computational techniques may be used to model how humans learn abstract musical relationships, namely tonality and octave equivalence. Rather than hard-coding musical rules, this model uses an unsupervised learning approach to glean tonal relationships from a musical corpus. We develop and test a novel, perceptually inspired harmonics-based input representation to bootstrap the model's learning of tonal structure. The results are compared with behavioral data from listeners' performance on a standard music perception task: the model effectively encodes tonal relationships from musical data, simulating expert performance on the listening task. Lastly, the results are contrasted with previous findings from a computational model that uses a simpler symbolic input representation of pitch.
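
The abstract does not specify the exact form of the harmonics-based representation, but the underlying idea can be illustrated with a minimal sketch. The code below is a hypothetical construction (all names and parameters are assumptions, not the paper's method): each pitch is encoded by the activations of its first few overtones on a semitone-spaced log-frequency grid, so that octave-related pitches share many harmonics and thus overlap in the representation, which is one way harmonic co-occurrence could bootstrap octave equivalence.

    import numpy as np

    def harmonic_representation(midi_pitch, n_harmonics=8, n_bins=128):
        """Encode a pitch as activations of its first n harmonics on a
        semitone-spaced log-frequency grid (illustrative sketch only)."""
        f0 = 440.0 * 2 ** ((midi_pitch - 69) / 12)  # fundamental frequency in Hz
        vec = np.zeros(n_bins)
        for k in range(1, n_harmonics + 1):
            f = k * f0
            # nearest semitone bin for the k-th harmonic
            bin_idx = int(round(69 + 12 * np.log2(f / 440.0)))
            if 0 <= bin_idx < n_bins:
                vec[bin_idx] += 1.0 / k  # assumed weighting: amplitude decays with harmonic number
        return vec

    # Octave-related pitches (e.g., C4 and C5) share many harmonics, so their
    # vectors overlap substantially, unlike a one-hot symbolic pitch encoding.
    c4, c5 = harmonic_representation(60), harmonic_representation(72)
    cos_sim = np.dot(c4, c5) / (np.linalg.norm(c4) * np.linalg.norm(c5))
    print(f"cosine similarity, C4 vs C5: {cos_sim:.2f}")

Under this sketch, a symbolic one-hot encoding of C4 and C5 would have zero overlap, whereas the harmonics-based vectors share bins, giving an unsupervised learner a perceptual basis for grouping octave-equivalent pitches.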
