This dissertation introduces a biologically detailed computational model of how rule-guided behaviors become automatic. The model assumes that rule-guided behaviors are initially controlled by a distributed neural network centered in the prefrontal cortex, and that in addition to initiating behavior, this network trains a faster, more direct network that includes projections from sensory association cortex directly to rule-sensitive neurons in premotor cortex. After extensive practice, the direct network alone is sufficient to control the behavior, without prefrontal involvement. The model is implemented as a network of spiking neurons that learns via a biologically plausible form of Hebbian learning, and it successfully accounts for single-unit recordings and human behavioral data that are problematic for other models of automaticity.
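The mechanism described above can be caricatured in a minimal sketch. This is illustrative only, not the dissertation's actual model: the leaky integrate-and-fire simplification, all parameter values, and the constant "prefrontal teacher" drive are assumptions chosen for brevity.

```python
# Minimal sketch (assumptions, not the dissertation's model): a presynaptic
# "sensory" unit and a postsynaptic "premotor" unit, both leaky
# integrate-and-fire neurons. Early in training the premotor unit is driven
# by a stand-in prefrontal teacher signal; a Hebbian rule strengthens the
# direct sensory-to-premotor weight whenever pre- and postsynaptic spikes
# coincide, so the direct pathway gradually takes over.

TAU = 0.9            # membrane leak factor per time step (assumed)
THRESHOLD = 1.0      # firing threshold (assumed)
LEARNING_RATE = 0.05 # Hebbian step size (assumed)

class LIFNeuron:
    """Leaky integrate-and-fire unit: leak, integrate input, spike, reset."""
    def __init__(self):
        self.v = 0.0
    def step(self, input_current):
        self.v = TAU * self.v + input_current
        if self.v >= THRESHOLD:
            self.v = 0.0  # reset after a spike
            return 1
        return 0

def train(n_steps=200, sensory_drive=0.6, teacher_drive=0.6):
    """Run the two-unit circuit and return the learned direct weight."""
    pre, post = LIFNeuron(), LIFNeuron()
    w = 0.0  # direct sensory->premotor weight, initially silent
    for _ in range(n_steps):
        pre_spike = pre.step(sensory_drive)
        # Premotor unit: prefrontal teacher plus the (growing) direct input.
        post_spike = post.step(teacher_drive + w * pre_spike)
        # Hebbian update: potentiate on coincident pre/post spikes.
        w += LEARNING_RATE * pre_spike * post_spike
    return w

w_final = train()
```

With both drives active, pre- and postsynaptic spikes regularly coincide and the direct weight grows from zero, mirroring (in caricature) how repeated co-activation could let the direct pathway eventually drive the response without the teacher.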
The dissertation also presents the results of two experiments investigating what is automatized after lengthy practice with a rule-guided behavior. Both experiments suggest that an abstract rule, understood as a verbally based strategy, was not automatized during training; rather, automatization linked a set of stimuli sharing similar values on one visual dimension to a common motor response. The experiments were designed to test the Cortex Automatizes Rules Model, and the present results support it, suggesting that the projections from visual cortex to prefrontal and premotor cortex are restricted to visual representations of the relevant stimulus dimension.