
Learning abstractions from discrete sequences

Abstract

Understanding abstraction is a stepping stone towards understanding intelligence. We ask: How do abstract representations arise when learning sequences? From a normative perspective, we show that abstraction is necessary for an intelligent agent when the perceptual sequence contains objects with similar interaction properties appearing in identical contexts. A rational agent should identify categories of objects with similar properties as abstract concepts, enabling the discovery of higher-order sequential relations that span longer parts of the sequence. We propose a hierarchical variable learning model (HVM) that learns chunks and abstract concepts from sequential data in a cognitively plausible manner. HVM gradually discovers abstraction through the interplay of variable discovery and chunking, resembling the process of concept discovery during development. In a sequence recall experiment that demands learning and transferring variables, the model's sequence complexity explains human memorization behavior.
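
The abstract describes two interacting operations: chunking (merging symbols that frequently co-occur) and variable discovery (grouping symbols that fill identical contexts into one abstract concept). The following is a minimal, illustrative sketch of these two operations on a toy symbol sequence; it is not the paper's HVM implementation, and the sequence, thresholds, and function names are hypothetical.

```python
# Toy sketch (assumption, not the paper's code) of chunking and variable
# discovery over a discrete symbol sequence.
from collections import Counter, defaultdict


def chunk(sequence, min_count=3):
    """Merge the most frequent adjacent pair into a single chunk symbol."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    (a, b), count = pair_counts.most_common(1)[0]
    if count < min_count:
        return sequence, None
    merged, i = [], 0
    while i < len(sequence):
        if i + 1 < len(sequence) and (sequence[i], sequence[i + 1]) == (a, b):
            merged.append((a, b))  # the new chunk replaces the pair
            i += 2
        else:
            merged.append(sequence[i])
            i += 1
    return merged, (a, b)


def discover_variable(sequence):
    """Group symbols sharing the same (left, right) context into one variable."""
    contexts = defaultdict(set)
    for left, mid, right in zip(sequence, sequence[1:], sequence[2:]):
        contexts[(left, right)].add(mid)
    # The context filled by the most distinct symbols marks symbols that
    # behave interchangeably: summarize them as one abstract variable.
    context, fillers = max(contexts.items(), key=lambda kv: len(kv[1]))
    return context, fillers


if __name__ == "__main__":
    # Hypothetical sequence: 'x', 'y', 'z' each appear between 'a' and 'bc'.
    seq = list("axbcaybcazbcaxbcaybc")
    seq, new_chunk = chunk(seq)
    print("chunk learned:", new_chunk)           # ('b', 'c')
    context, fillers = discover_variable(seq)
    print("variable context:", context)          # ('a', ('b', 'c'))
    print("interchangeable fillers:", fillers)   # {'x', 'y', 'z'}
```

Alternating these two steps is one way such a learner could build progressively higher-order structure: chunks become new symbols, and variables stand in for sets of interchangeable symbols or chunks.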
