eScholarship
Open Access Publications from the University of California

Hebbian Learning of Artificial Grammars

Abstract

A connectionist model is presented that uses a Hebbian learning rule to acquire knowledge about an artificial grammar (AG). The validity of the model was evaluated by simulating two classic experiments from the AG learning literature. The first experiment showed that human subjects were significantly better at learning to recall a set of strings generated by an AG than strings generated by a random process. The model shows the same pattern of performance. The second experiment showed that human subjects were able to generalize the knowledge they acquired during AG learning to novel strings generated by the same grammar. The model is also capable of generalization, and the percentages of errors made by human subjects and by the model are qualitatively and quantitatively very similar. Overall, the model suggests that Hebbian learning is a viable candidate for the mechanism by which human subjects become sensitive to the regularities present in AGs. From the perspective of computational neuroscience, the implications of the model for implicit learning theory, as well as what the model may suggest about the relationship between implicit and explicit memory, are discussed.
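The core idea — strengthening connections between co-occurring elements so that grammatical strings come to feel more "familiar" than random ones — can be sketched in a few lines. This is an illustrative toy, not the paper's actual model: the alphabet, training strings, learning rate, and the bigram (letter-to-letter) association scheme are all assumptions made for the example.

```python
import numpy as np

# Toy alphabet for a Reber-style artificial grammar; '#' marks string boundaries.
ALPHABET = ["#", "M", "T", "V", "R", "X"]
IDX = {c: i for i, c in enumerate(ALPHABET)}

def train_hebbian(strings, lr=0.1):
    """Hebbian rule: each co-activation of successive letters strengthens
    the weight between them (delta_w = lr * pre * post, unit activations)."""
    W = np.zeros((len(ALPHABET), len(ALPHABET)))
    for s in strings:
        padded = "#" + s + "#"
        for a, b in zip(padded, padded[1:]):
            W[IDX[a], IDX[b]] += lr
    return W

def familiarity(W, s):
    """Mean association strength across the string's letter transitions."""
    padded = "#" + s + "#"
    return np.mean([W[IDX[a], IDX[b]] for a, b in zip(padded, padded[1:])])

# Hypothetical training set generated by the toy grammar.
grammatical = ["MTV", "MTTV", "MVRX", "MVRXV"]
W = train_hebbian(grammatical)

print(familiarity(W, "MTTTV"))  # novel string that follows the regularities
print(familiarity(W, "XRVM"))   # string that violates them (scores lower)
```

After training, a novel but grammar-consistent string inherits strength from the learned transitions, while an ungrammatical string does not — the same generalization-without-explicit-rules pattern the abstract attributes to both human subjects and the model.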
