eScholarship
Open Access Publications from the University of California

Trees neural those: RNNs can learn the hierarchical structure of noun phrases

Abstract

Humans use both linear and hierarchical representations in language processing, and the exact role of each has been debated. One domain where hierarchical processing is important is noun phrases. English noun phrases have a fixed order of prenominal modifiers: demonstratives - numerals - adjectives (these two green vases). However, when English speakers learn an artificial language with postnominal modifiers, instead of reproducing this linear order they preserve the distance between each modifier and the noun (vases green two these). This has been explained by a hierarchical homomorphism bias. Here, we investigate whether RNNs exhibit this bias. We pre-train one linear and two hierarchical models on English and expose them to a small artificial language. We then test them on noun phrases from a study with humans and find that only the hierarchical models can exhibit the bias, supporting the idea that homomorphic word order preferences arise from hierarchical, not linear, relations.
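The reordering pattern the abstract describes can be illustrated with a minimal sketch (the variable names and this toy example are ours, not from the paper): preserving each modifier's distance from the noun when modifiers move from before the noun to after it amounts to reversing their linear order.

```python
# Toy illustration of the hierarchical-homomorphism pattern:
# English prenominal order is demonstrative > numeral > adjective,
# with the adjective closest to the noun.
prenominal = ["these", "two", "green"]  # modifiers of the noun "vases"

# If each modifier keeps its distance from the noun when placed
# postnominally, the linear order of the modifiers is reversed:
postnominal = list(reversed(prenominal))

phrase = ["vases"] + postnominal
print(" ".join(phrase))  # noun first, then adjective > numeral > demonstrative
```

Running this prints "vases green two these", the homomorphic order the abstract contrasts with a direct copy of the English linear order ("vases these two green").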
