Learning Context-free Grammars: Capabilities and Limitations of a Recurrent Neural Network with an External Stack Memory

Abstract

This work describes an approach for inferring Deterministic Context-free (DCF) grammars in a connectionist paradigm using a Recurrent Neural Network Pushdown Automaton (NNPDA). The NNPDA consists of a recurrent neural network connected to an external stack memory through a common error function. We show that the NNPDA is able to learn the dynamics of an underlying pushdown automaton from examples of grammatical and non-grammatical strings. Not only does the network learn the state transitions of the automaton, it also learns the actions required to control the stack. To permit the use of continuous optimization methods, we develop an analog stack, which is converted to a discrete stack by quantization of all activations after the network has learned the transition rules and stack actions. We further show an enhancement of the network's learning capabilities achieved by providing hints. In addition, an initial comparative study of simulations with first-, second-, and third-order recurrent networks has shown that the increased degrees of freedom in higher-order networks improve generalization, but not necessarily learning speed.
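To make the analog stack idea concrete, the sketch below shows one plausible realization of a continuous stack in Python: each pushed symbol carries a real-valued "thickness", pop removes a continuous amount of depth, and reading returns a thickness-weighted blend near the top. The class name, the thickness-based representation, and the read-depth parameter are illustrative assumptions for exposition, not the paper's exact formulation.

import numpy as np

class AnalogStack:
    """Minimal sketch of a continuous (analog) stack.

    Assumes each element is stored with a real-valued thickness in (0, 1],
    so that push/pop actions produced by a network can be fractional.
    """

    def __init__(self):
        # Each entry: (symbol_vector, thickness)
        self.items = []

    def push(self, symbol, amount):
        # Push `symbol` with continuous strength `amount` in (0, 1].
        if amount > 0:
            self.items.append((np.asarray(symbol, dtype=float), float(amount)))

    def pop(self, amount):
        # Remove `amount` of total thickness from the top of the stack.
        remaining = float(amount)
        while remaining > 0 and self.items:
            symbol, thickness = self.items[-1]
            if thickness <= remaining:
                self.items.pop()
                remaining -= thickness
            else:
                self.items[-1] = (symbol, thickness - remaining)
                remaining = 0.0

    def read(self, depth=1.0):
        # Return the thickness-weighted average of symbols within `depth`
        # of the top: a continuous analogue of reading the top symbol.
        total, acc = 0.0, None
        for symbol, thickness in reversed(self.items):
            take = min(thickness, depth - total)
            acc = symbol * take if acc is None else acc + symbol * take
            total += take
            if total >= depth:
                break
        return acc / total if acc is not None else None

# Example: fractional pushes and pops blend the top-of-stack reading.
stack = AnalogStack()
stack.push([1.0, 0.0], 0.7)   # mostly push symbol "a"
stack.push([0.0, 1.0], 0.5)   # partially push symbol "b"
stack.pop(0.3)                # continuous pop
print(stack.read())           # blended reading near the top

Under this reading, the quantization step described in the abstract corresponds to rounding the learned continuous action values to full push, pop, or no-op after training, which collapses the analog stack back into an ordinary discrete one.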
