eScholarship
Open Access Publications from the University of California

Experiments With Sequential Associative Memories

Abstract

Humans are very good at manipulating sequential information, but sequences present special problems for connectionist models. As an approach to sequential problems we have examined totally connected subnetworks of cells called sequential associative memories (SAMs). The coefficients for SAM cells are unmodifiable and are generated at random. A subnetwork of SAM cells performs two tasks:

1. Their activations determine a state for the network that permits previous inputs and outputs to be recalled, and
2. They increase the dimensionality of input and output representations to make it possible for other (modifiable) cells in the network to learn difficult tasks.

The second function is similar to the distributed method, a way of generating intermediate cells for non-sequential problems.

Results from several experiments are presented. The first is a robotic control task that required a network to produce one of several sequences of outputs when input cells were set to a corresponding 'plan number'. The second experiment was to learn a sequential version of the parity function that would generalize to arbitrarily long input strings. Finally, we attempted to teach a network to add arbitrarily long pairs of binary numbers. Here we were successful if the network contained a cell dedicated to the notion of 'carry'; otherwise the network performed at less than 100% for unseen sequences longer than those used during training. Each of these tasks required a representation of state, and hence a network with feedback. All were learned using subnetworks of SAM cells.
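The two tasks described above can be illustrated with a minimal sketch of a SAM-style subnetwork: fixed, randomly generated input and recurrent coefficients drive a nonlinear state update, so the state at each step is a higher-dimensional echo of the input history that a separate trainable cell could read out. The names (`W_in`, `W_rec`, `sam_states`), the sizes, the tanh nonlinearity, and the spectral-radius scaling are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_SAM = 1, 32  # input width and SAM subnetwork size (illustrative choices)

# Fixed, randomly generated coefficients for the SAM cells -- never modified.
W_in = rng.normal(0.0, 1.0, size=(N_SAM, N_IN))
W_rec = rng.normal(0.0, 1.0, size=(N_SAM, N_SAM))
# Scale so the recurrent dynamics are stable (an assumption; the paper does
# not specify how its random coefficients were generated).
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))

def sam_states(inputs):
    """Run a sequence through the SAM subnetwork, one state per time step.

    Each state (1) encodes previous inputs via the recurrent feedback, and
    (2) expands the 1-dimensional input into an N_SAM-dimensional
    representation for other, modifiable cells to learn from."""
    h = np.zeros(N_SAM)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ np.atleast_1d(float(x)) + W_rec @ h)
        states.append(h.copy())
    return np.array(states)

# Two sequences that differ only in their first bit leave different final
# states, so a downstream trainable cell can still tell the histories apart.
s_a = sam_states([1, 0, 0, 0])
s_b = sam_states([0, 0, 0, 0])
print(np.linalg.norm(s_a[-1] - s_b[-1]) > 1e-3)
```

In this sketch only a readout on top of `sam_states` would ever be trained; the random subnetwork itself stays frozen, which is the property that distinguishes SAM cells from the modifiable cells in the rest of the network.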
