
An Entropy Model of Artificial Grammar Learning

Abstract

We propose a model to characterize the type of knowledge acquired in Artificial Grammar Learning (AGL). In particular, we suggest a way to compute the complexity of different test items in an AGL task, relative to the training items, based on the notion of Shannon entropy: the more predictable a test item is from the training items, the higher the likelihood that it will be selected as compatible with them. Our model is an attempt to formalize some aspects of inductive inference by providing a quantitative measure of the knowledge abstracted from experience. We motivate our particular approach from research in reasoning and categorization, where reduction of entropy has also been seen as a plausible cognitive objective. This suggests that reducing (Shannon) uncertainty may provide a single explanatory framework for modeling aspects of cognition as diverse as learning, reasoning, and categorization.
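
The abstract does not spell out how a test item's predictability from the training items is estimated. The sketch below illustrates one common reading, under assumptions not taken from the paper: a first-order (bigram) Markov model fit to the training strings, with a test item's mean per-symbol surprisal serving as its entropy-based complexity score. The function names, the smoothing constant alpha, and the toy strings are all hypothetical.

```python
from collections import defaultdict
from math import log2

def bigram_counts(strings):
    """Tally symbol-to-symbol transitions over the training strings,
    padding each string with start/end markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in strings:
        symbols = ["<s>"] + list(s) + ["</s>"]
        for prev, cur in zip(symbols, symbols[1:]):
            counts[prev][cur] += 1
    return counts

def mean_surprisal(item, counts, alpha=1.0):
    """Average per-symbol surprisal (in bits) of a test item under the
    training transition statistics; add-alpha smoothing keeps unseen
    transitions at a finite cost. Lower values = more predictable."""
    vocab = set(counts) | {c for row in counts.values() for c in row}
    symbols = ["<s>"] + list(item) + ["</s>"]
    total = 0.0
    for prev, cur in zip(symbols, symbols[1:]):
        row = counts.get(prev, {})
        p = (row.get(cur, 0) + alpha) / (sum(row.values()) + alpha * len(vocab))
        total += -log2(p)
    return total / (len(symbols) - 1)

# Toy strings for illustration only; not drawn from the paper's grammar.
training = ["MTV", "MTTV", "MTVR"]
counts = bigram_counts(training)
for item in ["MTV", "VRTM"]:
    print(item, round(mean_surprisal(item, counts), 3))
```

Under this reading, a test item with lower mean surprisal is more predictable from the training set, and so, on the model's central claim, more likely to be selected as compatible with the training items.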
