Learning Relative Attribute Weights For Instance-Based Concept Descriptions

Abstract

Nosofsky recently described an elegant instance-based model (GCM) for concept learning that defined similarity (partly) in terms of a set of attribute weights. He showed that, when given the proper parameter settings, the GCM closely fit his human subject data on classification performance. However, no algorithm was described for learning the attribute weights. The central thesis of the GCM is that subjects distribute their attention among attributes to optimize their classification and learning performance. In this paper, we introduce two comprehensive process models based on the GCM. Our first model is simply an extension of the GCM that learns relative attribute weights. The GCM's learning and representational capabilities are limited: concept descriptions are assumed to be disjoint and exhaustive. Therefore, our second model is a further extension that learns a unique set of attribute weights for each concept description. Our empirical evidence indicates that this extension outperforms the simple GCM process model when the domain includes overlapping concept descriptions with conflicting attribute relevancies.
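For readers unfamiliar with the GCM, the sketch below illustrates how attribute (attention) weights enter the similarity computation and the resulting classification probabilities. The function names, parameter values, and example stimuli are illustrative assumptions, not the paper's implementation; the second model described in the abstract would additionally maintain a separate weight vector per concept description.

```python
# Minimal sketch of GCM-style classification with attention (attribute)
# weights. Names and parameter values are assumptions for illustration.

import math

def gcm_similarity(x, y, weights, c=1.0, r=1.0, p=1.0):
    """Similarity between stimuli x and y under attention weights.

    distance   d = c * (sum_k w_k * |x_k - y_k|**r) ** (1/r)
    similarity s = exp(-d**p)
    """
    d = c * sum(w * abs(a - b) ** r for w, a, b in zip(weights, x, y)) ** (1.0 / r)
    return math.exp(-d ** p)

def classify(probe, exemplars, weights):
    """Return P(category | probe) for each category.

    `exemplars` maps a category label to its stored instances. Here a
    single shared weight vector is assumed; the paper's second model
    would instead use per-concept weights.
    """
    evidence = {
        cat: sum(gcm_similarity(probe, e, weights) for e in instances)
        for cat, instances in exemplars.items()
    }
    total = sum(evidence.values())
    return {cat: s / total for cat, s in evidence.items()}

if __name__ == "__main__":
    exemplars = {
        "A": [(0.1, 0.9), (0.2, 0.8)],
        "B": [(0.9, 0.1), (0.8, 0.3)],
    }
    # Attention concentrated on the first attribute.
    print(classify((0.15, 0.5), exemplars, weights=(0.8, 0.2)))
```

Because the weights rescale each attribute's contribution to distance, shifting attention toward a diagnostic attribute sharpens the classification probabilities, which is the behavior the learning algorithms in the paper are meant to produce.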
