
Revisiting the poverty of the stimulus: hierarchical generalization without a hierarchical bias in recurrent neural networks

Abstract

Syntactic rules in natural language typically need to make reference to hierarchical sentence structure. However, the simple examples that language learners receive are often equally compatible with linear rules. Children consistently ignore these linear explanations and settle instead on the correct hierarchical one. This fact has motivated the proposal that the learner's hypothesis space is constrained to include only hierarchical rules. We examine this proposal using recurrent neural networks (RNNs), which are not constrained in such a way. We simulate the acquisition of question formation, a hierarchical transformation, in a fragment of English. We find that some RNN architectures tend to learn the hierarchical rule, suggesting that hierarchical cues within the language, combined with the implicit architectural biases inherent in certain RNNs, may be sufficient to induce hierarchical generalizations. The likelihood of acquiring the hierarchical generalization increased when the language included an additional cue to hierarchy in the form of subject-verb agreement, underscoring the role of cues to hierarchy in the learner's input.
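
To make the linear/hierarchical ambiguity concrete, the sketch below contrasts the two candidate question-formation rules on a toy example. The sentences, the auxiliary inventory, and the hand-supplied position of the main-clause auxiliary are illustrative assumptions for exposition, not the paper's actual grammar, stimuli, or evaluation code.

    # A minimal sketch (assumed toy setup, not the paper's implementation)
    # of the two rules a learner could infer from question formation.

    AUXILIARIES = {"can", "will", "is", "does"}

    def linear_rule(words):
        """Front the FIRST auxiliary in the word string (linear rule)."""
        for i, w in enumerate(words):
            if w in AUXILIARIES:
                return [w.capitalize()] + words[:i] + words[i + 1:]
        return words

    def hierarchical_rule(words, main_aux_index):
        """Front the MAIN-CLAUSE auxiliary (hierarchical rule).
        The auxiliary's position is supplied by hand here, standing in
        for what a real structural analysis would compute."""
        w = words[main_aux_index]
        return [w.capitalize()] + words[:main_aux_index] + words[main_aux_index + 1:]

    # Ambiguous input: with a single auxiliary, both rules agree,
    # so simple declaratives cannot tell the learner which rule is right.
    simple = "the walrus can giggle".split()
    assert linear_rule(simple) == hierarchical_rule(simple, 2)

    # Disambiguating input: a relative clause places another auxiliary
    # earlier in the string, so the two rules now diverge.
    complex_ = "the walrus that is reading can giggle".split()
    print(" ".join(linear_rule(complex_)))           # "Is the walrus that reading can giggle" (ungrammatical)
    print(" ".join(hierarchical_rule(complex_, 5)))  # "Can the walrus that is reading giggle" (correct)

The point of the contrast is that only sentences with embedded clauses, rare in child-directed speech, distinguish the two rules; on simple declaratives the rules produce identical outputs, which is what makes the learner's preference for the hierarchical rule informative.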
