
Methods for Learning Articulated Attractors over Internal Representations

Abstract

Recurrent attractor networks have many virtues that have prompted their use in a wide variety of connectionist cognitive models. One of these virtues is the ability of these networks to learn articulated attractors: meaningful basins of attraction arising from the systematic interaction of explicitly trained patterns. Such attractors can improve generalization by enforcing "well-formedness" constraints on representations, massaging noisy and ill-formed patterns of activity into clean and useful patterns. This paper investigates methods for learning articulated attractors at the hidden layers of recurrent backpropagation networks. It has previously been shown that standard connectionist learning techniques fail to form such structured attractors over internal representations. To address this problem, this paper presents two unsupervised learning rules that give rise to componential attractor structures over hidden units. The performance of these learning methods on a simple structured memory task is analyzed.
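
The core idea the abstract relies on, that attractor basins can clean up noisy patterns of activity, can be illustrated with a classic Hopfield-style attractor network. The sketch below is purely illustrative and is not the paper's hidden-layer learning rules; the stored patterns, the Hebbian weight construction, and the `settle` routine are assumptions chosen only to demonstrate the general cleanup behaviour.

```python
import numpy as np

# Illustrative sketch only: a Hopfield-style attractor network, not the
# unsupervised hidden-layer rules proposed in the paper. Stored patterns
# become point attractors, and iterating the dynamics pulls a noisy input
# into the nearest basin, "cleaning it up".

# Two arbitrary bipolar (+1/-1) patterns to store.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])

# Hebbian outer-product weights with a zeroed diagonal.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def settle(state, max_steps=10):
    """Synchronously update all units until the state stops changing."""
    state = state.astype(float)
    for _ in range(max_steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1.0  # break ties toward +1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state.astype(int)

# Corrupt the first stored pattern with two flipped bits, then settle.
noisy = patterns[0].copy()
noisy[[0, 1]] *= -1

print("noisy:  ", noisy)
print("settled:", settle(noisy))   # converges back to patterns[0]
print("target: ", patterns[0])
```

In this toy setting each explicitly trained pattern is its own attractor; the articulated attractors discussed in the paper go further, in that systematic interactions among trained patterns yield meaningful basins that were never trained directly.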
