
REMIND: Integrating Language Understanding and Episodic Memory Retrieval in a Connectionist Network

Abstract

Most AI simulations have modeled memory retrieval separately from language understanding, even though both activities seem to use many of the same processes. This paper describes REMIND, a structured spreading-activation model of integrated text comprehension and episodic reminding. In REMIND, activation is spread through a semantic network that performs dynamic inferencing and disambiguation to infer a conceptual representation of an input cue. Because stored episodes are associated with the concepts used to understand them, the spreading-activation process also activates any memory episodes that share features or knowledge structures with the cue. After a conceptual representation of the cue is formed, the episode in the network with the highest activation is recalled from memory. Since the inferences made from a cue often include actors' plans and goals only implied in its text, REMIND is able to get abstract remindings that would not be possible without an integrated understanding and retrieval model.
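To make the retrieval mechanism concrete, here is a minimal Python sketch of the kind of spreading-activation retrieval the abstract describes: activation spreads outward from the cue's concepts through a semantic network, stored episodes accumulate activation through the concepts they share with the cue, and the most highly activated episode is recalled. This is not REMIND's implementation; the toy network, the episodes, and the parameters DECAY and N_STEPS are all hypothetical, and REMIND's dynamic inferencing and disambiguation machinery is omitted.

```python
from collections import defaultdict

# Hypothetical parameters (not from the paper).
DECAY = 0.5     # attenuation of activation per link traversed
N_STEPS = 3     # propagation depth

# Toy semantic network: concept -> associated concepts (illustrative only).
semantic_net = {
    "order-food": ["restaurant", "hunger", "satisfy-goal"],
    "restaurant": ["waiter", "menu"],
    "hunger": ["satisfy-goal"],
    "menu": ["order-food"],
    "waiter": [],
    "satisfy-goal": [],
}

# Stored episodes, indexed by the concepts used to understand them,
# as the abstract describes.
episodes = {
    "dinner-at-restaurant": {"restaurant", "order-food", "waiter"},
    "vending-machine-snack": {"hunger", "satisfy-goal"},
}

def spread_activation(cue_concepts):
    """Propagate activation outward from the cue's concepts."""
    activation = defaultdict(float)
    frontier = {c: 1.0 for c in cue_concepts}
    for _ in range(N_STEPS):
        next_frontier = defaultdict(float)
        for concept, act in frontier.items():
            activation[concept] += act
            for neighbor in semantic_net.get(concept, []):
                next_frontier[neighbor] += act * DECAY
        frontier = next_frontier
    return activation

def recall(cue_concepts):
    """Recall the episode that accumulates the most activation
    from the concepts it shares with the cue."""
    activation = spread_activation(cue_concepts)
    return max(episodes,
               key=lambda ep: sum(activation[c] for c in episodes[ep]))

print(recall({"hunger", "satisfy-goal"}))  # -> vending-machine-snack
print(recall({"menu", "restaurant"}))      # -> dinner-at-restaurant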
