The connectionist model IAK (Information evaluation using
configurations) for classification learning is presented here.
The model can be placed between feature-based models (e.g., Gluck
& Bower, 1988) and exemplar-based models (e.g., ALCOVE;
Kruschke, 1992). Specific to this model is that, during
learning, sets of input features are sampled probabilistically.
These sets are represented by configuration nodes in a hidden
layer, and the configuration nodes are connected to output nodes
that represent the category labels.
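As a rough illustration of this architecture, the following Python sketch shows how probabilistically sampled feature sets could be stored as configuration nodes and linked to category output nodes. It is not the authors' implementation; the class name ConfigurationNetwork, the sampling probability sample_prob, and the simple weight-increment learning rule are illustrative assumptions, and the retrieval-enhancement mechanism mentioned below is omitted.

```python
# Minimal sketch of a configuration-based classifier (illustrative only).
import random
from collections import defaultdict

class ConfigurationNetwork:
    def __init__(self, sample_prob=0.5, learning_rate=0.1):
        self.sample_prob = sample_prob    # probability of including a feature (assumption)
        self.learning_rate = learning_rate
        # weights[configuration][category] -> connection strength
        self.weights = defaultdict(lambda: defaultdict(float))

    def _sample_configuration(self, stimulus):
        # Probabilistically sample a subset of the stimulus features.
        return frozenset(f for f in stimulus.items()
                         if random.random() < self.sample_prob)

    def train(self, stimulus, category):
        # Create (or reuse) a configuration node for the sampled feature set
        # and strengthen its connection to the trained category node.
        config = self._sample_configuration(stimulus)
        if config:
            self.weights[config][category] += self.learning_rate

    def classify(self, stimulus):
        # Activate every configuration node whose features are contained in
        # the stimulus and sum the weights arriving at each category node.
        features = set(stimulus.items())
        activation = defaultdict(float)
        for config, links in self.weights.items():
            if config <= features:
                for category, w in links.items():
                    activation[category] += w
        return max(activation, key=activation.get) if activation else None

# Toy usage: two categories defined over three binary dimensions.
net = ConfigurationNetwork()
training = [({"size": 1, "color": 0, "shape": 1}, "A"),
            ({"size": 0, "color": 1, "shape": 0}, "B")]
for _ in range(200):
    for stimulus, label in training:
        net.train(stimulus, label)
print(net.classify({"size": 1, "color": 0, "shape": 1}))  # likely "A"
```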
A further characteristic of the IAK model is a mechanism
that enhances the retrieval of information. Simulations with
the IAK model can explain several phenomena of
classification learning that have been found in
experimental studies: a Type II advantage, observed by
Shepard et al. (1961), without dimensional attention
learning; generalization to prototypes; generalization based
on similarity to learned exemplars; differential forgetting
of prototypes and exemplars; moderate interference (fan
effect) caused by stimulus similarity; and the absence of
catastrophic interference even in A-B/A-Br designs.