
The driving forces of polarity-sensitivity: Experiments with multilingual pre-trained neural language models

Abstract

Polarity-sensitivity is a typologically general linguistic phenomenon. We focus on negative polarity items (NPIs, e.g. English 'any') -- expressions that are licensed only in negative contexts. The relevant notion of 'negative context' could be defined lexically, syntactically, or semantically. There is psycholinguistic evidence in favour of semantics as a driving factor for some NPIs in a couple of languages (Chemla, Homer, & Rothschild, 2011; Denić, Homer, Rothschild, & Chemla, 2021). Testing this analysis experimentally at the scale required to establish a potential cross-linguistic universal is extremely hard. We turn to recent multilingual pre-trained language models -- multilingual BERT (Devlin, Chang, Lee, & Toutanova, 2018) and XLM-RoBERTa (Conneau et al., 2019) -- and evaluate the models' recognition of polarity-sensitivity and its cross-lingual generality. Further, using the artificial language learning paradigm, we look for a connection in neural language models between the semantic profiles of expressions and their ability to license NPIs. We find evidence for such a connection for negation, but not for the other items we study.
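
The kind of evaluation described in the abstract can be illustrated with a minimal probing sketch: compare the probability a multilingual masked language model assigns to an NPI such as 'any' in a negative (licensing) context versus a positive (non-licensing) one. The checkpoint name, sentence frames, and scoring function below are illustrative assumptions, not the materials or method of the paper.

```python
# Hypothetical sketch: probing a multilingual masked LM for NPI licensing
# by comparing the log-probability of 'any' in a negative vs. a positive frame.
# Model name and example sentences are assumptions for illustration only.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # assumed mBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def npi_log_prob(frame: str, npi: str = "any") -> float:
    """Log-probability of the NPI at the masked position in the frame."""
    text = frame.replace("[NPI]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_idx = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_idx]  # scores at the mask position
    log_probs = torch.log_softmax(logits, dim=-1)
    npi_id = tokenizer.convert_tokens_to_ids(npi)
    return log_probs[0, npi_id].item()

# Negative (licensing) vs. positive (non-licensing) frames.
negative = "She did not buy [NPI] books."
positive = "She bought [NPI] books."

diff = npi_log_prob(negative) - npi_log_prob(positive)
print(f"log P(any | negative) - log P(any | positive) = {diff:.3f}")
# A positive difference suggests the model prefers the NPI in the negative context.
```

A positive score difference over many such minimal pairs would indicate that the model distinguishes licensing from non-licensing contexts; running the same frames translated into other languages would probe the cross-lingual generality the abstract refers to.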
