
Simple Recurrent Networks and Natural Language: How Important is Starting Small?

Abstract

Prediction is believed to be an important component of cognition, particularly in natural language processing. It has long been accepted that recurrent neural networks are best able to learn prediction tasks when trained on simple examples before incrementally proceeding to more complex sentences. Furthermore, the counter-intuitive suggestion has been made that networks and, by implication, humans may be aided in learning by limited cognitive resources (Elman, 1993, Cognition). The current work reports evidence that starting with simplified inputs is not necessary in training recurrent networks to learn pseudo-natural languages; in fact, delayed introduction of complex examples is often an impediment. We suggest that the structure of natural language can be learned without special teaching methods or limited cognitive resources.
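To make the contrast between the two training regimes concrete, the sketch below implements an Elman-style simple recurrent network on a toy next-word prediction task and compares a "starting small" curriculum (simple sentences first, complex sentences introduced later) with training on the mixed corpus from the outset. The toy grammar, vocabulary, network size, learning rate, and curriculum schedule are illustrative assumptions, not the materials or parameters used in the paper; gradients are truncated at a single time step, in the spirit of Elman's original SRN training procedure.

```python
"""Minimal sketch (assumed toy setup): an Elman-style simple recurrent network
(SRN) on next-word prediction, comparing a "starting small" curriculum with
training on the full mixed corpus from the start."""
import numpy as np

rng = np.random.default_rng(0)

# Toy pseudo-language (illustrative assumption, not the paper's grammar).
VOCAB = ["boy", "girl", "dogs", "cats", "chases", "sees", "who", "."]
IDX = {w: i for i, w in enumerate(VOCAB)}
V = len(VOCAB)

def simple_sentence():
    # Subject-verb-object sentence with no embedding.
    return [rng.choice(["boy", "girl"]), rng.choice(["chases", "sees"]),
            rng.choice(["dogs", "cats"]), "."]

def complex_sentence():
    # One level of relative-clause embedding, e.g. "boy who sees cats chases dogs ."
    return [rng.choice(["boy", "girl"]), "who", rng.choice(["chases", "sees"]),
            rng.choice(["dogs", "cats"]), rng.choice(["chases", "sees"]),
            rng.choice(["dogs", "cats"]), "."]

def one_hot(i):
    v = np.zeros(V); v[i] = 1.0
    return v

class SRN:
    """Elman network: the context layer is a copy of the previous hidden state,
    and gradients are truncated at the current time step."""
    def __init__(self, hidden=16, lr=0.1):
        self.Wxh = rng.normal(0, 0.1, (hidden, V))
        self.Whh = rng.normal(0, 0.1, (hidden, hidden))
        self.Why = rng.normal(0, 0.1, (V, hidden))
        self.hidden, self.lr = hidden, lr

    def run_sentence(self, words, learn=True):
        h = np.zeros(self.hidden)
        loss = 0.0
        for t in range(len(words) - 1):
            x, target = one_hot(IDX[words[t]]), IDX[words[t + 1]]
            context = h                                   # copy of previous hidden state
            h = np.tanh(self.Wxh @ x + self.Whh @ context)
            logits = self.Why @ h
            p = np.exp(logits - logits.max()); p /= p.sum()
            loss -= np.log(p[target] + 1e-12)             # cross-entropy on next word
            if learn:
                dlogits = p.copy(); dlogits[target] -= 1.0
                dh = (self.Why.T @ dlogits) * (1.0 - h ** 2)
                self.Why -= self.lr * np.outer(dlogits, h)
                self.Wxh -= self.lr * np.outer(dh, x)
                self.Whh -= self.lr * np.outer(dh, context)
        return loss / (len(words) - 1)

def run(curriculum, epochs=2000):
    net = SRN()
    for epoch in range(epochs):
        if curriculum == "starting_small" and epoch < epochs // 2:
            sent = simple_sentence()                      # early phase: simple inputs only
        else:
            sent = complex_sentence() if rng.random() < 0.5 else simple_sentence()
        net.run_sentence(sent, learn=True)
    # Evaluate prediction loss on complex sentences only (no weight updates).
    return np.mean([net.run_sentence(complex_sentence(), learn=False)
                    for _ in range(200)])

for regime in ("starting_small", "full_complexity"):
    print(regime, "complex-sentence prediction loss:", round(run(regime), 3))
```

In this sketch the only difference between the two regimes is the sentence-sampling schedule; everything else (architecture, learning rate, total amount of training) is held constant, which is the kind of comparison the abstract describes.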
