
Cross-language structural priming in recurrent neural network language models

Creative Commons Attribution 4.0 (CC BY 4.0) license
Abstract

Recurrent neural network (RNN) language models trained on large text corpora have shown a remarkable ability to capture properties of human syntactic processing (Linzen & Baroni, 2021). For example, the fact that these models display human-like structural priming effects (Prasad, van Schijndel, & Linzen, 2019; van Schijndel & Linzen, 2018) suggests that they develop implicit syntactic representations not unlike those of the human language system. Although training RNNs on two or more languages simultaneously is technically unproblematic, whether they can also simulate aspects of human multilingual sentence processing remains a rarely explored question (Frank, 2021).
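As an illustration of both points, the sketch below (not the authors' implementation; the toy English-Dutch corpus, model sizes, and hyperparameters are assumptions chosen for demonstration only) trains a small LSTM language model in PyTorch on a mixed bilingual corpus and computes per-word surprisal, the quantity that structural-priming studies of RNN language models typically compare across prime-target conditions.

# Minimal sketch: ordinary LM training on a mixed bilingual toy corpus,
# plus a surprisal measure of the kind used in priming studies. A priming
# effect would show up as lower target surprisal after a structurally
# matching prime than after a mismatching one.
import torch
import torch.nn as nn

# Toy "bilingual" corpus: English and Dutch sentences mixed in one training set.
corpus = [
    "the girl gives the boy a book",
    "the girl gives a book to the boy",
    "het meisje geeft de jongen een boek",
    "het meisje geeft een boek aan de jongen",
]
vocab = {w: i + 1 for i, w in enumerate(sorted({w for s in corpus for w in s.split()}))}
vocab["<bos>"] = 0

def encode(sentence):
    return torch.tensor([vocab["<bos>"]] + [vocab[w] for w in sentence.split()])

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.embed(ids))
        return self.out(h)

model = LSTMLanguageModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Training on two languages simultaneously is just standard next-word
# prediction on the concatenated corpus; nothing language-specific is needed.
for epoch in range(200):
    for sentence in corpus:
        ids = encode(sentence).unsqueeze(0)
        logits = model(ids[:, :-1])
        loss = loss_fn(logits.squeeze(0), ids[0, 1:])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def surprisal(sentence):
    """Mean per-word surprisal (negative log-probability) in nats."""
    ids = encode(sentence).unsqueeze(0)
    with torch.no_grad():
        logits = model(ids[:, :-1])
        logp = torch.log_softmax(logits, dim=-1)
        return -logp[0, torch.arange(ids.size(1) - 1), ids[0, 1:]].mean().item()

print(surprisal("the girl gives the boy a book"))

In a cross-language priming analysis, the same surprisal measure would be computed for a target sentence in one language after exposure to primes in the other language; the point of the sketch is only that the training procedure itself does not distinguish monolingual from bilingual input.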
