A Connectionist Model Of Instruction Following

Abstract

In this paper we describe a general connectionist model of "learning by being told". Unlike common network models of inductive learning, which rely on the slow modification of connection weights, our model of instructed learning focuses on rapid changes in the activation state of a recurrent network. We view stable distributed patterns of activation in such a network as internal representations of provided advice - representations which can modulate the behavior of other networks. We suggest that the stability of these configurations of activation can arise over the course of learning an instructional language, and that these stable patterns should appear as articulated attractors in the activation space of the recurrent network. In addition to proposing this general model, we also report on the results of two computational experiments. In the first, networks are taught to respond appropriately to direct instruction concerning a simple mapping task. In the second, networks receive instructions describing procedures for binary arithmetic, and they are trained to immediately implement the specified algorithms on pairs of binary numbers. While the networks in these preliminary experiments were not designed to embody the attractor dynamics inherent in our general model, they provide support for this approach by demonstrating the ability of recurrent backpropagation networks to learn an instructional language in the service of some task and thereafter exhibit prompt instruction following behavior.
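To make the general scheme concrete, the sketch below shows one way such a model could be wired up in plain numpy: a small recurrent encoder compresses an instruction (a token sequence) into a stable hidden activation pattern, and that pattern then gates a separate feedforward task network, so the same task weights implement different mappings depending on the encoded advice. This is a minimal illustrative sketch, not the authors' architecture; the layer sizes, the multiplicative gating, and all function names are assumptions, and training is omitted entirely.

```python
# Minimal sketch (not the paper's code): a recurrent "instruction" network
# whose final hidden state modulates a feedforward "task" network.
# Layer sizes, nonlinearities, and the multiplicative gating are
# illustrative assumptions, not details taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 8        # assumed instruction vocabulary size (one-hot tokens)
HIDDEN = 16      # assumed recurrent hidden size
TASK_IN = 4      # assumed task-input width (e.g. a pair of binary digits)
TASK_OUT = 2     # assumed task-output width (e.g. sum and carry bits)

# Recurrent instruction encoder: h_t = tanh(W_ih x_t + W_hh h_{t-1})
W_ih = rng.normal(0, 0.1, (HIDDEN, VOCAB))
W_hh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))

# Task network whose hidden layer is gated by the instruction representation.
W_task_in = rng.normal(0, 0.1, (HIDDEN, TASK_IN))
W_task_out = rng.normal(0, 0.1, (TASK_OUT, HIDDEN))

def encode_instruction(token_ids):
    """Run the instruction tokens through the recurrent net; the final
    hidden state serves as the distributed representation of the advice."""
    h = np.zeros(HIDDEN)
    for t in token_ids:
        x = np.zeros(VOCAB)
        x[t] = 1.0
        h = np.tanh(W_ih @ x + W_hh @ h)
    return h

def follow_instruction(instruction_state, task_input):
    """The stored activation pattern multiplicatively gates the task
    network's hidden units, so behavior changes with the instruction
    while the task weights stay fixed."""
    gate = 1.0 / (1.0 + np.exp(-instruction_state))   # squash to (0, 1)
    hidden = np.tanh(W_task_in @ task_input) * gate
    return 1.0 / (1.0 + np.exp(-(W_task_out @ hidden)))

# Example: encode a (hypothetical) 3-token instruction, then apply it
# to a task input without any further weight changes.
advice = encode_instruction([2, 5, 1])
print(follow_instruction(advice, np.array([1.0, 0.0, 1.0, 1.0])))
```

The point the sketch tries to capture is that behavior changes immediately with the encoded instruction, with no weight updates at instruction time; the gating used here is just one plausible way to realize the modulation the abstract describes.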
