A Computational Model for Learning Structured Concepts From Physical Scenes

Abstract

Category learning is an essential cognitive mechanism for making sense of the world. Many existing computational models of category learning focus on categories that can be represented as feature vectors, yet a substantial portion of the categories we encounter have members with internal structure and internal relationships. We present a novel computational model that perceives and learns structured concepts from physical scenes. The perception and learning processes happen simultaneously and interact with each other. We apply the model to a set of physical categorization tasks and promote specific types of comparisons by manipulating the presentation order of examples. We find that these manipulations affect the algorithm much as they affect human participants who worked on the same tasks. Both benefit from juxtaposing examples of different categories, especially ones that are similar to each other; when juxtaposing examples from the same category, both do better if the examples are dissimilar to each other.
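To make the order manipulation concrete, here is a minimal sketch, not the authors' implementation, of how a presentation order that juxtaposes similar examples from different categories might be generated. The stimuli, the similarity measure, and the function names are all hypothetical:

```python
import random

def similarity(a, b):
    """Illustrative similarity: inverse Euclidean distance between feature tuples."""
    dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return 1.0 / (1.0 + dist)

def interleave_similar_across_categories(stimuli):
    """Build a presentation order that juxtaposes examples of *different*
    categories, preferring pairs that are similar to each other (the
    between-category condition the abstract reports as most helpful).
    Each stimulus is a hypothetical (category_label, feature_tuple) pair."""
    order = []
    pool = list(stimuli)
    random.shuffle(pool)
    while pool:
        first = pool.pop()
        order.append(first)
        # Candidates from a different category than the item just shown.
        others = [s for s in pool if s[0] != first[0]]
        if others:
            # Pick the most similar cross-category partner and show it next.
            partner = max(others, key=lambda s: similarity(first[1], s[1]))
            pool.remove(partner)
            order.append(partner)
    return order

if __name__ == "__main__":
    stimuli = [("A", (0.1, 0.2)), ("A", (0.9, 0.8)),
               ("B", (0.15, 0.25)), ("B", (0.85, 0.75))]
    print(interleave_similar_across_categories(stimuli))
```

The analogous within-category manipulation described in the abstract would instead block examples of the same category while maximizing, rather than minimizing, the feature distance between consecutive items.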
