
Neural-agent Language Learning and Communication: Emergence of Dependency Length Minimization

Creative Commons Attribution (CC BY) 4.0 license
Abstract

Natural languages tend to minimize the linear distance between heads and their dependents in a sentence, a tendency known as dependency length minimization (DLM). This preference, however, has not been consistently replicated in neural-agent simulations. Comparing the behavior of models with that of human learners can reveal which aspects affect the emergence of this phenomenon. This work investigates the minimal conditions that may lead neural learners to develop a DLM preference. We add three factors to the standard neural-agent language learning and communication framework to make the simulation more realistic, namely: (i) the presence of noise during listening, (ii) context-sensitivity of word use, and (iii) incremental sentence processing. While no preference appears in production, we show that the proposed factors contribute to a small but significant learning advantage for DLM in listeners of verb-initial languages. Our findings offer insights into the essential elements contributing to DLM preferences in purely statistical learners.
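To make the dependency-length metric concrete, the following minimal Python sketch (illustrative only, not code from the paper) computes the total dependency length of a sentence from its head indices; DLM corresponds to preferring word orders that keep this sum small.

```python
# Illustrative sketch: total dependency length of a sentence, i.e. the sum of
# linear distances between each word and its syntactic head.
# heads[i] holds the 1-based position of the head of word i+1; 0 marks the root.

def total_dependency_length(heads: list[int]) -> int:
    """Sum of |dependent position - head position| over all non-root words."""
    return sum(abs((i + 1) - h) for i, h in enumerate(heads) if h != 0)

# Example: "the dog chased the cat" with heads [2, 3, 0, 5, 3]
# ("the"->"dog", "dog"->"chased", "chased" is the root, "the"->"cat", "cat"->"chased")
print(total_dependency_length([2, 3, 0, 5, 3]))  # 1 + 1 + 1 + 2 = 5
```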
