eScholarship
Open Access Publications from the University of California

The (Non)Necessity of Recursion in Natural Language Processing

Abstract

The prima facie unbounded nature of natural language, contrasted with the finite character of our memory and computational resources, is often taken to warrant a recursive language processing mechanism. The widely held distinction between an idealized infinite grammatical competence and the actual finite natural language performance provides further support for a recursive processor. In this paper, I argue that it is only necessary to postulate a recursive language mechanism insofar as the competence/performance distinction is upheld. However, I provide reasons for eschewing the latter and suggest that only data regarding observable linguistic behaviour ought to be used when modelling the human language mechanism. A connectionist model of language processing—the simple recurrent network proposed by Elman—is discussed as an example of a non-recursive alternative, and I conclude that the computational power of such models promises to be sufficient to account for natural language behaviour.
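For readers unfamiliar with the architecture the abstract refers to, the following is a minimal sketch of an Elman-style simple recurrent network in plain NumPy. The dimensions, weight scales, and class name are illustrative assumptions, not details from the paper; the key idea is that the hidden layer is fed back to itself through "context units" holding the previous hidden state, so sequences are processed iteratively rather than by recursive embedding.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SimpleRecurrentNetwork:
    """Elman-style SRN: at each time step the hidden layer receives the
    current input plus a copy of its own previous state (the context units)."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Hypothetical small random initialization
        self.W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
        self.W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context weights
        self.W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))
        self.h = np.zeros(n_hidden)  # context units (previous hidden state)

    def step(self, x):
        # New hidden state depends on the current input and the prior context
        self.h = sigmoid(self.W_xh @ x + self.W_hh @ self.h)
        return sigmoid(self.W_hy @ self.h)

    def process_sequence(self, xs):
        self.h[:] = 0.0  # reset context at the start of each sequence
        return [self.step(x) for x in xs]

# Process a toy 3-symbol sequence (one-hot vectors over a 4-word vocabulary)
srn = SimpleRecurrentNetwork(n_in=4, n_hidden=8, n_out=4)
sequence = [np.eye(4)[i] for i in (0, 2, 1)]
outputs = srn.process_sequence(sequence)
```

Note that the network uses a fixed amount of memory (one hidden-state vector) regardless of sequence length, which is the property that makes it a candidate non-recursive account of bounded language performance.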
