
Improving Associative Memory Capacity: One-Shot Learning in Multilayer Hopfield Networks

Abstract

Our brains have an extraordinarily large capacity to store and recognize complex patterns after only one or a very few exposures to each item. Existing computational learning algorithms fall short of accounting for these properties of human memory; they either require a great many learning iterations, or they can do one-shot learning but suffer from very poor capacity. In this paper, we explore one approach to improving the capacity of simple Hebbian pattern associators: adding hidden units. We propose a deterministic algorithm for choosing good target states for the hidden layer. In assessing the performance of the model, we argue that it is critical to examine both increased stability and increased basin size of the attractor around each stored pattern. Our algorithm achieves both, thereby improving the network's capacity to recall noisy patterns. Further, the hidden layer helps to cushion the network from interference effects as the memory is overloaded. Another technique, almost as effective, is to "soft-clamp" the input layer during retrieval. Finally, we discuss other approaches to improving memory capacity, as well as the relation between our model and extant models of the hippocampal system.
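To make the baseline concrete, below is a minimal sketch (in Python/NumPy) of one-shot Hebbian storage and attractor retrieval in a single-layer Hopfield network, the kind of pattern associator whose capacity the paper aims to improve. The hidden layer and the paper's deterministic procedure for choosing hidden target states are not reproduced here, and the optional clamp term is only one plausible reading of "soft-clamping" (the probe persists as an external field rather than being hard-fixed); all names and parameters are illustrative.

    import numpy as np

    def hebbian_store(patterns):
        # One-shot Hebbian learning: each +/-1 pattern is stored with a
        # single outer-product update; no iterative training is needed.
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)  # standard Hopfield convention: no self-connections
        return W / patterns.shape[0]

    def recall(W, probe, steps=20, clamp=0.0):
        # Synchronous retrieval dynamics: threshold the net input until the
        # state stops changing (a fixed point of the attractor dynamics).
        # clamp > 0 is an assumed analogue of "soft-clamping": the probe acts
        # as a persistent external input instead of only the initial state.
        s = probe.copy()
        for _ in range(steps):
            s_new = np.sign(W @ s + clamp * probe)
            s_new[s_new == 0] = 1
            if np.array_equal(s_new, s):
                break
            s = s_new
        return s

    # Toy usage: store a few random patterns, then recall from a noisy probe.
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1.0, 1.0], size=(3, 100))  # 3 patterns, 100 units
    W = hebbian_store(patterns)

    probe = patterns[0].copy()
    flip = rng.choice(100, size=10, replace=False)     # corrupt 10% of the bits
    probe[flip] *= -1
    print("recovered:", np.array_equal(recall(W, probe), patterns[0]))

With only a handful of patterns over 100 units, a noisy probe typically falls back into the stored attractor; as more patterns are stored, interference degrades both stability and basin size, which is the failure mode the hidden layer is meant to cushion.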
