
Inverting A Connectionist Network Mapping By Back-Propagation of Error

Abstract

The back-propagation learning algorithm (Rumelhart, Hinton, & Williams, 1986) for connectionist networks works by adjusting the weights along the negative of the gradient, in weight space, of a standard error measure. The back-propagation technique is simply an efficient and entirely local means of computing this gradient. Using what is essentially the same back-propagation scheme, one may instead compute the gradient of this error measure in the space of input activation vectors; this gives rise to an algorithm for inverting the mapping performed by a network with specified weights. In this case the error is propagated back to the input units, and it is the activations of these units, rather than the values of the weights in the network, that are adjusted so that a specified output pattern is evoked. This technique is illustrated here with a small network that is a much simplified version of the NETtalk text-to-speech network studied by Sejnowski and Rosenberg (1986). The idea is to run this network backward so that it attempts to spell words based on their phonetic representations. This example further illustrates the use of the technique in a sequential interpretation setting, in which phonemes are presented to the system one at a time and the system must refine its previous guess at the correct spelling as each new phoneme is presented.
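To make the procedure concrete, below is a minimal sketch (not code from the paper) of inverting a small, fixed-weight, two-layer sigmoid network by gradient descent on its input activations. The layer sizes, random weights, target output pattern, step size, and iteration count are illustrative assumptions; any constraint keeping the recovered input activations in the unit interval is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 5))   # input-to-hidden weights (held fixed)
W2 = rng.normal(size=(5, 3))   # hidden-to-output weights (held fixed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(x @ W1)
    y = sigmoid(h @ W2)
    return h, y

target = np.array([0.9, 0.1, 0.8])   # specified output pattern to evoke
x = np.full(8, 0.5)                  # initial guess at the input activations

for _ in range(2000):
    h, y = forward(x)
    # Back-propagate the squared-error signal through the output and hidden
    # layers, then carry it one step further back onto the input units,
    # giving the gradient of the error with respect to the input activations.
    delta_out = (y - target) * y * (1.0 - y)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)
    grad_x = delta_hid @ W1.T
    x -= 0.5 * grad_x                # adjust the inputs, not the weights

print("recovered input: ", np.round(x, 2))
print("evoked output:   ", np.round(forward(x)[1], 2))

Running this drives the network's output toward the target pattern by changing only the input vector, which is the sense in which the mapping performed by the fixed-weight network is being inverted.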
