We present evidence that successful chunk formation during a
statistical learning task depends on how well the perceiver is
able to parse the information that is presented between
successive presentations of the to-be-learned chunk. First, in a
visual statistical learning task, we show that learners acquire a
chunk better when the surrounding information is also chunk-able.
We then tested three process models of chunk formation (TRACX,
PARSER, and MDLChunker) on our two experimental conditions, and
found that only PARSER and MDLChunker matched the observed result.
These two models share the principle that memory capacity expands
as a result of learning. Though
implemented in very different ways, both models effectively
remember more individual items (the atomic components of a
sequence) as additional chunks are formed. In both models, the
ability to remember more information directly facilitates further
learning, suggesting a positive feedback loop in chunk learning.
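
To make this feedback loop concrete, the sketch below is a deliberately simplified toy simulation, not an implementation of TRACX, PARSER, or MDLChunker: memory holds a fixed number of units, any sufficiently frequent pair of adjacent units is merged into a new chunk, and the number of raw items spanned by a single memory fill grows as the chunk lexicon grows. The input stream, the MEMORY_SLOTS capacity, and the CHUNK_THRESHOLD criterion are illustrative assumptions, not parameters from the models above.

```python
import random
from collections import Counter

MEMORY_SLOTS = 4       # hypothetical fixed capacity, counted in units
CHUNK_THRESHOLD = 3    # hypothetical co-occurrence count needed to form a chunk

random.seed(0)
words = ["AB", "CD", "EF"]                        # to-be-learned chunks
stream = [s for w in random.choices(words, k=300) for s in w]

lexicon = set("ABCDEF")          # known units: single items at first
pair_counts = Counter()
coverage = []                    # raw items spanned by each memory fill

pos = 0
while pos < len(stream):
    # Fill memory greedily with the longest known unit at each position.
    longest = max(len(u) for u in lexicon)
    window, j = [], pos
    while len(window) < MEMORY_SLOTS and j < len(stream):
        for size in range(min(longest, len(stream) - j), 0, -1):
            unit = "".join(stream[j:j + size])
            if unit in lexicon:
                window.append(unit)
                j += size
                break
    coverage.append(j - pos)     # grows as more chunks become known

    # Frequent adjacent unit pairs inside memory become new chunks,
    # so later fills pack more raw items into the same number of slots.
    for a, b in zip(window, window[1:]):
        pair_counts[a + b] += 1
        if pair_counts[a + b] >= CHUNK_THRESHOLD:
            lexicon.add(a + b)
    pos = j

print("raw items per fill, first 10 fills:", sum(coverage[:10]) / 10)
print("raw items per fill, last 10 fills: ", sum(coverage[-10:]) / 10)
```

Running the sketch, the earliest fills cover roughly MEMORY_SLOTS raw items, while later fills cover several times as many, which is the expanded effective capacity described above.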