Constructing flexible feature representations using nonparametric Bayesian inference
- Austerweil, Joseph Larry
- Advisor(s): Griffiths, Thomas
Abstract
Representations are a key explanatory device used by cognitive psychologists to account for human behavior. However, little is known about how experience and context affect the representations people use to encode a stimulus. Understanding these effects is essential because if two people encode the same stimulus using different representations, their responses to that stimulus may differ. First, we present a mathematical framework, based on nonparametric Bayesian statistics, that can be used to define models that flexibly construct feature representations (where by a feature we mean a part of the image of an object) for a set of observed objects. An initial model constructed in this framework captures how the distribution of parts and category learning affect the features people use to represent a set of objects. Next, we build on this work in three ways. First, although people use features that can be transformed on each observation (e.g., translated on the retinal image), many existing feature learning models can only recognize features that are not transformed (i.e., that occur identically each time). Consequently, we extend the initial model to infer features that are invariant over a set of transformations, and to learn different structures of dependence between feature transformations. Second, we compare two possible methods for capturing the manner in which categorization affects feature representations. Third, we present a model that learns features incrementally, capturing an effect of the order of object presentation on the features people learn. Finally, we conclude by considering the implications and limitations of our empirical and theoretical results.
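The abstract describes models that construct an unbounded, data-driven inventory of features using nonparametric Bayesian priors. A canonical prior of this kind is the Indian Buffet Process, which places a distribution over binary object-by-feature matrices with an unbounded number of columns. The sketch below is illustrative only, not the dissertation's exact model; the function name `sample_ibp`, the parameter `alpha`, and the seeding are our own choices.

```python
import numpy as np

def sample_ibp(num_objects, alpha, seed=None):
    """Draw a binary object-by-feature matrix Z from an Indian Buffet Process prior.

    Row n is an object; column k is a latent feature. The number of columns is
    unbounded and grows with the data, which is what lets a model of this kind
    infer a flexible feature representation rather than fix one in advance.
    """
    rng = np.random.default_rng(seed)
    rows = []
    counts = []  # counts[k] = number of earlier objects that possess feature k
    for n in range(num_objects):
        # Reuse existing feature k with probability counts[k] / (n + 1):
        # popular features are more likely to be shared by new objects.
        row = [int(rng.random() < c / (n + 1)) for c in counts]
        counts = [c + z for c, z in zip(counts, row)]
        # Each object also introduces Poisson(alpha / (n + 1)) brand-new features.
        k_new = rng.poisson(alpha / (n + 1))
        row += [1] * k_new
        counts += [1] * k_new
        rows.append(row)
    # Pad earlier rows with zeros for features introduced later.
    Z = np.zeros((num_objects, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(num_objects=10, alpha=2.0, seed=0)
```

In a full model along the lines the abstract describes, a prior such as this would be combined with a likelihood linking latent features to observed object images, and posterior inference over Z would yield the learned feature representation.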