
Prior Expectations in Linguistic Learning: A Stochastic Model of Individual Differences

Abstract

When learners are exposed to inconsistent input, do they reproduce the probabilities in the input (probability matching), or produce some variants disproportionately often (regularization)? Laboratory results and computational models of artificial language learning both argue that the learning mechanism is basically probability matching, with regularization arising from additional factors. However, these models were fit to aggregated experimental data, which can exhibit probability matching even if all individuals regularize. To assess whether learning can be accurately characterized as basically probability matching or systematizing at the individual level, we ran a large-scale experiment. We found substantial individual variation. The structure of this variation is not predicted by recent beta-binomial models. We introduce a new model, the Double Scaling Sigmoid (DSS) model, fit its parameters on a by-participant basis, and show that it captures the patterns in the data. Prior expectations in the DSS are abstract, and do not entirely represent previous experience.
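To make the aggregation concern concrete, here is a minimal Python simulation sketch. It is not from the paper; the parameter values and the all-or-nothing regularization rule are illustrative assumptions. It shows how aggregate productions can match the input probability even when every individual learner fully regularizes.

```python
import numpy as np

rng = np.random.default_rng(0)

p_input = 0.7        # probability of variant A in the inconsistent input (assumed)
n_learners = 10_000
n_productions = 20   # productions elicited per learner at test (assumed)

# Every learner regularizes: each settles on a single variant and produces
# it on every trial. If the probability of settling on variant A equals its
# input probability, the aggregate output still matches p_input.
settles_on_A = rng.random(n_learners) < p_input
productions_of_A = np.where(settles_on_A, n_productions, 0)

aggregate = productions_of_A.sum() / (n_learners * n_productions)
print(f"input probability of A:    {p_input:.2f}")
print(f"aggregate proportion of A: {aggregate:.3f}")  # ~0.70: looks like matching

# Yet no individual probability-matches; every learner is fully regular:
by_participant = productions_of_A / n_productions
print("individual proportions observed:", np.unique(by_participant))  # [0. 1.]
```

Under these assumptions the pooled data are indistinguishable from population-level probability matching, which is why the abstract argues that models must be fit, and variation assessed, at the individual level.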
