The model discussed here is offered as a prototype of the use
of a computational model to explore alternate hypotheses and to
suggest possible answers to some of the questions which have been
addressed in the study of language acquisition. Why does the child
not end up with an overly generalized grammar or lexicon? There
is much evidence concerning the kinds of generalizations and
over-generalizations that children make. However, if we permit no
overt and specific correction of the child's errors, then how is it
that errors of over-generalization do not persist into adult speech?
One answer to this question is to attach a system of weights to
hypotheses. There are two related problems to be solved: some
mechanism in the model must allow erroneous hypotheses to be
corrected, and there must also be a way for more mature constructs
to replace earlier ones. The model accomplishes these
two tasks by means of a system of weights which represent confidence
values and recency values. Under this system, more frequently matched
constructs are preferred over less frequently matched ones,
and more recent hypotheses are favored for testing. This learning
paradigm is illustrated by a set of procedures for learning the past
tense of verbs in English. The scheme has the advantage that, for a
period when confidence factors are approximately in balance, two or
more constructs can coexist. Thus we need not speak of rules or
individual cases as learned or not yet learned, but rather of a
continuum in which rule schemas are either strong or weak.
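
To make the weighting scheme concrete, the following sketch may be
helpful. It is written in Python, which is not the language of the
original model, and the Hypothesis and Learner classes, the update
constants, and the tie-breaking rule are illustrative assumptions
rather than the model's actual procedures. The sketch shows how an
over-generalized form such as "goed" can coexist with "went" while
confidence values are approximately in balance, and how repeated
exposure, without any overt correction, lets the established form
prevail.

    class Hypothesis:
        """A candidate construct with confidence and recency weights."""
        def __init__(self, name, applies, produce, recency):
            self.name = name
            self.applies = applies   # predicate: does the construct cover this stem?
            self.produce = produce   # function: stem -> proposed past-tense form
            self.confidence = 1.0    # strengthened by successful matches
            self.recency = recency   # newer hypotheses are favored for testing

    class Learner:
        def __init__(self):
            self.clock = 0
            self.hypotheses = []

        def propose(self, name, applies, produce):
            # A later clock value marks the hypothesis as more recent.
            self.clock += 1
            self.hypotheses.append(Hypothesis(name, applies, produce, self.clock))

        def observe(self, stem, past):
            # Test every applicable construct against one (stem, past)
            # datum: matches gain confidence, contradictions lose it.
            for h in self.hypotheses:
                if h.applies(stem):
                    if h.produce(stem) == past:
                        h.confidence += 1.0
                    else:
                        h.confidence *= 0.5

        def past_of(self, stem):
            # Prefer the most confident applicable construct; while
            # confidences are in balance, recency breaks the tie.
            candidates = [h for h in self.hypotheses if h.applies(stem)]
            best = max(candidates, key=lambda h: (h.confidence, h.recency))
            return best.produce(stem)

    learner = Learner()
    # A rote, item-specific construct acquired early:
    learner.propose("went (rote)", lambda s: s == "go", lambda s: "went")
    # A general rule proposed later, which over-generalizes to irregulars:
    learner.propose("add -ed", lambda s: True, lambda s: s + "ed")

    print(learner.past_of("go"))       # "goed": the recent rule wins the balanced tie
    for _ in range(5):
        learner.observe("go", "went")  # repeated exposure, no overt correction
    print(learner.past_of("go"))       # "went": the rote form now dominates
    print(learner.past_of("walk"))     # "walked": the rule persists for regulars

The halving of a contradicted construct's confidence is one simple
choice among many; any update rule that lets frequently matched
constructs overtake recent but contradicted ones would show the same
course of "goed" giving way to "went" while the general rule survives
for regular verbs.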