Integrating Causal Learning Rules with Backpropagation in PDS Networks

Abstract

This paper presents a method for training PDP networks that, unlike backpropagation, does not require excessive amounts of training data or massive amounts of training time to generate appropriate generalizations. The method we present uses general conceptual knowledge about cause-and-effect relationships within a single training instance to constrain the number of possible generalizations. We describe how this approach has previously been implemented in rule-based systems, and we present a method for implementing the rules within the framework of Parallel Distributed Semantic (PDS) Networks, which use multiple PDP networks structured in the form of a semantic network. Integrating rules about causality with backprop in PDS Networks retains the advantages of PDP while avoiding the problems of enormous numbers of training instances and excessive amounts of training time.
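The abstract describes the approach only at a high level, so the following is a minimal, hypothetical sketch rather than the authors' implementation. It assumes a PDS-style arrangement in which several small backprop networks occupy the nodes of a semantic network, and a hand-coded causal rule masks out features of a single training instance that conceptual knowledge marks as causally irrelevant, so backpropagation is constrained toward plausible generalizations. All identifiers (PDSNode, apply_causal_rule, the toy concepts and features) are invented for illustration.

```python
# Illustrative sketch only: several small PDP (backprop) networks arranged as
# nodes of a semantic network, with a hand-coded "causal rule" that restricts
# which features of a single training instance may drive learning.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PDSNode:
    """One PDP network occupying a node of the semantic network (hypothetical)."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(x @ self.W1)
        self.y = sigmoid(self.h @ self.W2)
        return self.y

    def backprop(self, x, target):
        y = self.forward(x)
        dy = (y - target) * y * (1 - y)                 # output-layer delta
        dh = (dy @ self.W2.T) * self.h * (1 - self.h)   # hidden-layer delta
        self.W2 -= self.lr * np.outer(self.h, dy)
        self.W1 -= self.lr * np.outer(x, dh)
        return float(((y - target) ** 2).sum())

def apply_causal_rule(features, causally_relevant_mask):
    # Hypothetical causal rule: only features that conceptual knowledge marks
    # as potential causes of the observed effect are allowed to drive learning.
    return features * causally_relevant_mask

# A single toy training instance: 4 features, of which the rule says only the
# first two could have caused the effect.
x = np.array([1.0, 1.0, 1.0, 0.0])
mask = np.array([1.0, 1.0, 0.0, 0.0])
target = np.array([1.0])

# Semantic-network structure: concept nodes linked by name, each holding a net.
semantic_net = {
    "ball-thrown": PDSNode(n_in=4, n_hidden=3, n_out=1),
    "glass-broke": PDSNode(n_in=4, n_hidden=3, n_out=1),
}

constrained_x = apply_causal_rule(x, mask)
for _ in range(200):
    err = semantic_net["glass-broke"].backprop(constrained_x, target)
print("final error on the single constrained instance:", err)
```

Because the causally irrelevant features are zeroed before backpropagation, the network in this toy setup cannot attribute the effect to them, which is one simple way the "constrain the number of possible generalizations" idea could be realized from a single instance.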
