When learners are exposed to inconsistent input, do they reproduce the probabilities in the input (probability matching), or produce some variants disproportionately often (regularization)? Laboratory results and computational models of artificial language learning both suggest that the learning mechanism is basically probability matching, with regularization arising from additional factors. However, these models were fit to aggregated experimental data, which can exhibit probability matching even if all individuals regularize. To assess whether learning can be accurately characterized as basically probability matching or regularizing at the individual level, we ran a large-scale experiment. We found substantial individual variation. The structure of this variation is not predicted by recent beta-binomial models. We introduce a new model, the Double Scaling Sigmoid (DSS) model, fit its parameters on a by-participant basis, and show that it captures the patterns in the data. Prior expectations in the DSS are abstract and do not entirely represent previous experience.