
A multitask model of concept learning and generalization

Abstract

Human cognition is highly flexible: even when posed with novel questions and situations, we are able to manipulate our existing knowledge to draw reasonable conclusions. All human cognition requires flexibility, yet we lack a well-justified computational explanation for how humans might learn and manipulate conceptual knowledge in a way that allows for cognitive flexibility. Here, we develop and test a neural network model of how humans learn and use concept representations. The core of this model frames concepts as latent vector representations that are learned through observations across multiple context domains. The architecture we propose gives rise to a natural mechanism for generalization of conceptual knowledge between familiar domains. This work integrates findings and methods across cognitive science, neuroscience, and machine learning, and holds promise to advance the understanding of conceptual representations within each of these fields.
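The abstract characterizes the model's core as latent concept vectors trained against observations in multiple context domains. As an illustration only, the sketch below shows one common way such a multitask setup can be written in PyTorch: a shared embedding table holds a latent vector per concept, and a separate output head maps that vector into each domain's observation space. The class name, domain names, and dimensions here are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn

class MultitaskConceptModel(nn.Module):
    """Sketch: shared latent concept vectors with per-domain output heads."""

    def __init__(self, n_concepts, embed_dim, domain_output_dims):
        super().__init__()
        # One latent vector per concept, shared across all context domains.
        self.concept_embeddings = nn.Embedding(n_concepts, embed_dim)
        # One linear readout per domain (hypothetical domains below).
        self.domain_heads = nn.ModuleDict({
            name: nn.Linear(embed_dim, out_dim)
            for name, out_dim in domain_output_dims.items()
        })

    def forward(self, concept_ids, domain):
        z = self.concept_embeddings(concept_ids)   # shared latent code
        return self.domain_heads[domain](z)        # domain-specific prediction

# Example usage with two hypothetical domains.
model = MultitaskConceptModel(
    n_concepts=50,
    embed_dim=16,
    domain_output_dims={"perceptual_features": 8, "category_labels": 4},
)
pred = model(torch.tensor([3, 7]), domain="perceptual_features")
```

In a setup like this, generalization between familiar domains can arise because every head reads from the same latent concept vector, so structure learned in one domain constrains predictions in another; whether the paper's architecture works exactly this way is not specified in the abstract.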
