eScholarship
Open Access Publications from the University of California

Language representations in L2 learners: Toward neural models

Creative Commons Attribution 4.0 (CC BY) license
Abstract

We investigated how bilinguals' first language (L1) influences the representation and use of their second language (L2) using computational models. Drawing on essays from the International Corpus Network of Asian Learners of English (ICNALE), we first used measures of syntactic complexity in learners' L2 production to predict their L1. We then trained neural language models based on BERT to predict the L1 of these English learners. The results showed a systematic influence of L1 syntactic properties on English learners' L2 production, further confirming the integration of syntactic knowledge across languages in bilingual speakers. The results also showed that neural models can learn to represent and detect these L1 effects, although multilingually trained models showed no advantage in doing so.
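As a minimal sketch of the first step described above, the snippet below computes two simple syntactic-complexity proxies from a learner essay (mean sentence length and rate of subordinating conjunctions). The feature names and the subordinator list are illustrative assumptions, not the variables actually used in the study; features like these could then be fed to a classifier that predicts the writer's L1.

```python
import re

# Hypothetical, illustrative list of English subordinating conjunctions.
SUBORDINATORS = {"because", "although", "while", "since", "if", "when", "that"}

def complexity_features(essay: str) -> dict:
    """Compute simple syntactic-complexity proxies for one essay."""
    # Split into sentences on terminal punctuation (a rough heuristic).
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    # Tokenize into lowercase word tokens.
    tokens = re.findall(r"[a-zA-Z']+", essay.lower())
    n_sent = max(len(sentences), 1)
    n_tok = max(len(tokens), 1)
    return {
        # Average number of word tokens per sentence.
        "mean_sentence_length": len(tokens) / n_sent,
        # Fraction of tokens that are subordinating conjunctions.
        "subordination_rate": sum(t in SUBORDINATORS for t in tokens) / n_tok,
    }

feats = complexity_features("I stayed home because it rained. It was boring.")
```

In a study like this, per-essay feature vectors of this kind would be aggregated across an L1 group and used as predictors in a classification model.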
