Cross-language structural priming in recurrent neural network language models
Abstract
Recurrent neural network (RNN) language models trained on large text corpora have shown a remarkable ability to capture properties of human syntactic processing (Linzen & Baroni, 2021). For example, the fact that these models display human-like structural priming effects (Prasad, Van Schijndel, & Linzen, 2019; van Schijndel & Linzen, 2018) suggests that they develop implicit syntactic representations that may not be unlike those of the human language system. A rarely explored question is whether RNNs can also simulate aspects of human multilingual sentence processing (Frank, 2021), even though training RNNs on two or more languages simultaneously is technically unproblematic.
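To illustrate the final point, the sketch below shows how little machinery is needed to train a single RNN language model on two languages at once: sentences from both languages are simply drawn from one shared vocabulary and mixed into the same training stream. This is a minimal, hypothetical PyTorch example, not the authors' implementation; the class name, hyperparameters, and random toy data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): an LSTM language model trained on a
# corpus that mixes sentences from two languages over a shared vocabulary.
import torch
import torch.nn as nn

class BilingualLSTMLM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)  # next-word logits at every position

# Toy joint vocabulary over both languages; a real setup would use corpus-derived token IDs.
vocab_size = 1000
model = BilingualLSTMLM(vocab_size)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Interleaved "bilingual" batches: random token IDs stand in for sentences
# from language A and language B, all drawn from the same shared vocabulary.
for step in range(3):
    batch = torch.randint(0, vocab_size, (8, 20))   # (batch size, sequence length)
    logits = model(batch[:, :-1])                    # predict each next token
    loss = loss_fn(logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The only design choice specific to the bilingual setting here is the shared vocabulary and the mixed training stream; the network itself is a standard next-word-prediction LSTM, which is what makes multilingual training "technically unproblematic."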