
The relative amount of information contributed by learning and by pre-specification in a SRN trained to compute sameness

Abstract

We analyze the conditions under which Simple Recurrent Networks (SRNs) learn and generalize sameness. This task is difficult for a generic SRN, and several properties of the network have to be established prior to any learning for generalization to occur. We show that by selecting a set of narrow weight intervals, a network can learn sameness from a limited set of examples. The intervals depend on the particular training set, and we obtained them from a series of simulations using the complete training set. We can approximate the relative amount of information provided by the initial structure and the amount provided by the examples. Although we did not arrive at a general rule, in all our cases the initial structure provides much more information than the examples. This shows that if something similar to ANNs operates in the brain, a rich innate structure is needed to support the learning of general functions.
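The abstract describes an SRN whose initial weights are drawn from narrow, task-specific intervals before it is trained on the sameness task. The sketch below illustrates that setup under stated assumptions: the interval values, the network sizes D and H, the learning rate, and the one-step truncated-BPTT update are all hypothetical placeholders, not the paper's actual configuration (the authors obtained their intervals from simulations over the complete training set).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical narrow initial-weight intervals. The paper derives
# task-specific intervals from simulations over the complete training
# set; the values below are illustrative placeholders only.
INTERVALS = {"W_xh": (0.4, 0.6), "W_hh": (-0.1, 0.1), "W_hy": (0.8, 1.2)}

D, H = 2, 8  # input width and hidden-layer size (also assumptions)
W_xh = rng.uniform(*INTERVALS["W_xh"], size=(H, D))  # input -> hidden
W_hh = rng.uniform(*INTERVALS["W_hh"], size=(H, H))  # context -> hidden
W_hy = rng.uniform(*INTERVALS["W_hy"], size=(1, H))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sequence(xs, lr=0.1):
    """One pass over a binary sequence with one-step truncated BPTT.

    Sameness task: the target at step t is 1 when x_t equals x_{t-1},
    else 0, so the network must compare the input with its context.
    """
    global W_xh, W_hh, W_hy
    h_prev = np.zeros(H)
    x_prev = None
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h_prev)   # Elman-style hidden update
        y = sigmoid(W_hy @ h)[0]                # score: current == previous?
        if x_prev is not None:
            target = float(np.array_equal(x, x_prev))
            dz = y - target                     # cross-entropy gradient at the output
            dh = (W_hy[0] * dz) * (1.0 - h**2)  # backprop through tanh, one step only
            W_hy -= lr * dz * h[None, :]
            W_xh -= lr * np.outer(dh, x)
            W_hh -= lr * np.outer(dh, h_prev)
        x_prev, h_prev = x, h

for _ in range(500):
    seq = rng.integers(0, 2, size=(20, D)).astype(float)
    train_sequence(seq)
```

Generalization would then be probed by evaluating the trained network on input patterns held out of the training sequences, mirroring the paper's question of how much information the constrained initial structure contributes relative to the examples themselves.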
