Open Access Publications from the University of California

Department of Linguistics

UCLA Working Papers in Phonetics

WPP, No.111: Focus, prosody, and individual differences in “autistic” traits: Evidence from cross-modal semantic priming

  • Author(s): Bishop, Jason, et al.

The present study explored listeners’ expectations about how prosodic prominence can be used to disambiguate information structure in English. In particular, the contribution of prenuclear accents to the prosodic disambiguation of the size of the focus constituent (broad VP vs. narrow object focus) in SVO constructions was tested using the cross-modal priming paradigm. In two experiments, listeners were presented with visual targets (e.g., “brunette”) following contrastively related primes (e.g., “blonde”), which were heard as objects in SVO sentences (e.g., “He kissed a blonde.”). In Experiment 1, listeners heard the sentences produced with a single pitch accent on the object, and the focus structure varied from broad VP focus to narrow object focus. No significant differences in priming patterns across conditions were found, supporting theories of Focus Projection (e.g., Selkirk 1995, Gussenhoven 1984), which predict prenuclear accents to be optional. In Experiment 2, the information structure of the sentences was held constant as narrow object focus, and their prosody varied with respect to the presence of a prenuclear pitch accent on the verb. For these narrow focus sentences, it was found that priming occurred only when the sentence lacked a prenuclear accent, suggesting that prenuclear pitch accents contribute meaningfully to the information structural contrast. Sensitivity to the prosodic manipulation, however, was found to be modulated by individual differences in listeners’ “autistic” traits. The implications for on-line lexical processing and theories of the mapping between prosody and information structure are discussed.
