
Virtual Memories and Massive Generalization in Connectionist Combinatorial Learning

Abstract

We report a series of experiments on connectionist learning that addresses a particularly pressing set of objections to the plausibility of connectionist learning as a model of human learning. Connectionist models have typically suffered from rather severe problems of inadequate generalization (where generalizations are significantly fewer than training inputs) and interference of newly learned items with previously learned items. Taking a cue from the domains in which human learning dramatically overcomes such problems, we see that indeed connectionist learning can escape these problems in combinatorially structured domains. In the simple combinatorial domain of letter sequences, we find that a basic connectionist learning model trained on 50 6-letter sequences can correctly generalize to about 10,000 novel sequences. We also discover that the model exhibits over 1,000,000 virtual memories: new items which, although not correctly generalized, can be learned in a few presentations while leaving performance on the previously learned items intact. We conclude that connectionist learning is not as harmful to the empiricist position as previously reported experiments might suggest.
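The kind of experiment the abstract describes can be illustrated with a minimal sketch, which is not the paper's actual model or data: a one-hidden-layer autoassociative network trained by batch gradient descent on one-hot encodings of 6-letter sequences drawn from a combinatorial space of per-position letter pools, then probed on novel combinations. The pool contents, hidden-layer size, learning rate, and training schedule are all invented for the example.

```python
# A minimal sketch, NOT the paper's architecture or data: a one-hidden-layer
# autoassociator trained by batch gradient descent on one-hot encoded 6-letter
# sequences. Letter pools, hidden size, learning rate, and epochs are assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)

ALPHABET = "abcdefghijklmnopqrstuvwx"
SEQ_LEN = 6
N_LETTERS = len(ALPHABET)
INPUT_DIM = SEQ_LEN * N_LETTERS                    # 6 slots x 24 one-hot units

# Combinatorial domain: each position draws from its own 4-letter pool,
# giving 4**6 = 4096 possible sequences (assumed structure, for illustration).
POOLS = ["abcd", "efgh", "ijkl", "mnop", "qrst", "uvwx"]
ALL_SEQS = ["".join(p) for p in itertools.product(*POOLS)]

def encode(seq):
    """One-hot encode a 6-letter sequence into a 144-dim input vector."""
    v = np.zeros(INPUT_DIM)
    for i, ch in enumerate(seq):
        v[i * N_LETTERS + ALPHABET.index(ch)] = 1.0
    return v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Train on 50 sequences sampled from the combinatorial domain.
train_seqs = list(rng.choice(ALL_SEQS, size=50, replace=False))
X = np.stack([encode(s) for s in train_seqs])

HIDDEN = 40                                        # assumed hidden-layer size
W1 = rng.normal(0.0, 0.1, (INPUT_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, INPUT_DIM))
lr = 2.0

for epoch in range(5000):                          # plain batch gradient descent
    H = sigmoid(X @ W1)                            # hidden activations
    Y = sigmoid(H @ W2)                            # reconstructed sequences
    d_out = (Y - X) * Y * (1.0 - Y)                # squared-error delta at output
    d_hid = (d_out @ W2.T) * H * (1.0 - H)         # backpropagated delta at hidden
    W2 -= lr * (H.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_hid) / len(X)

def reconstructs(seq):
    """True if the net maps the sequence back to itself under per-slot argmax."""
    y = sigmoid(sigmoid(encode(seq) @ W1) @ W2).reshape(SEQ_LEN, N_LETTERS)
    return all(ALPHABET[np.argmax(row)] == ch for row, ch in zip(y, seq))

# Generalization probe: count novel sequences (never trained on) handled correctly.
novel = [s for s in ALL_SEQS if s not in set(train_seqs)]
correct = sum(reconstructs(s) for s in novel)
print(f"{correct} of {len(novel)} novel sequences reconstructed correctly")
```

Run as a standalone script; with these invented settings the counts will not match the paper's reported figures, which come from the authors' own architecture and training set.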
