Minimal Generative Explanations: A Middle Ground between Neurons and Triggers
Abstract
This paper describes a class of procedures for discovering linguistic structure, along with some specific procedures and measures of their effectiveness. This approach is well suited to problems like learning the forms of words from connected speech, learning word-formation rules, and learning phonotactic constraints and phonological rules. These procedures acquire a symbolic representation, such as a list of word forms, a list of morphemes, or a set of context-sensitive rules, each of which serves as the language-particular component of a generative grammar. Each procedure considers only a clearly defined set of possible generative grammars. This hypothesis space can be thought of as the procedure's "universal grammar". Procedures are evaluated for effectiveness by computer simulation on input consisting of naturally occurring language. Thus, they must be robust: small changes to the input must lead to little or no change in the conclusions. This research program resembles the connectionist program in its focus on phenomena like word segmentation, morphology, and phonology, its emphasis on robustness, and its reliance on computer simulation. However, it is closer to parameter setting and learnability theory in its focus on learning generative grammars selected from a clearly defined hypothesis space, or "universal grammar". Further, to the extent that connectionism is about neural implementations while parameter setting and learnability theory are about universal grammars, the study of effective procedures for language acquisition stands at an intermediate level of abstraction.
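To make the general shape of such a procedure concrete, the sketch below shows a toy acquisition procedure whose hypothesis space is restricted to lists of word forms and whose input is unsegmented text. It is only an illustration under assumed simplifications: the greedy longest-match strategy, the function names (segment, learn_lexicon), and the toy corpus are inventions for this example and are not the specific procedures or evaluation measures reported in the paper.

```python
# Illustrative sketch (assumptions, not the paper's procedures): a procedure
# whose "universal grammar" is the set of all lists of word forms. Given
# unsegmented utterances, it segments each one by greedily matching the
# longest known word form at each position; any unmatched residue is itself
# hypothesized to be a word form and added to the lexicon.

def segment(utterance, lexicon):
    """Greedy left-to-right segmentation using the current lexicon."""
    words, i, residue = [], 0, ""
    while i < len(utterance):
        # Longest lexicon entry that matches starting at position i, if any.
        match = max((w for w in lexicon if utterance.startswith(w, i)),
                    key=len, default=None)
        if match:
            if residue:              # flush accumulated residue as a candidate word
                words.append(residue)
                residue = ""
            words.append(match)
            i += len(match)
        else:
            residue += utterance[i]
            i += 1
    if residue:
        words.append(residue)
    return words

def learn_lexicon(utterances):
    """Acquire a list of word forms from unsegmented input."""
    lexicon = set()
    for utt in utterances:
        for w in segment(utt, lexicon):
            lexicon.add(w)           # every hypothesized word enters the lexicon
    return lexicon

if __name__ == "__main__":
    corpus = ["doggie", "lookatthedoggie", "lookattheball"]
    print(sorted(learn_lexicon(corpus)))   # {'ball', 'doggie', 'lookatthe'}
```

The greedy longest-match rule keeps the sketch short; the point is only that the learner chooses among clearly delimited symbolic hypotheses (lists of word forms) and is run on naturally occurring, unsegmented input, so its robustness can be checked by perturbing that input and observing how much the acquired lexicon changes.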