Learning as a function of task complexity was examined in human learning and in two connectionist simulations. The example task involved learning to map basic input/output digital logic functions for six digital gates (AND, OR, XOR, and their negated versions) with 2 or 6 inputs. Humans given instruction learned the task in about 300 trials and showed no effect of the number of inputs. Backpropagation learning in a network with 20 hidden units required 68,000 trials and scaled poorly, requiring 8 times as many trials to learn the 6-input gates as to learn the 2-input gates. A second simulation combined backpropagation with task division based upon the rules humans use to perform the task. The combined approach improved the scaling of the problem, learning in 3,100 trials and requiring about 3 times as many trials to learn the 6-input gates as to learn the 2-input gates. Issues regarding scaling and augmenting connectionist learning with rule-based instruction are discussed.
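The backpropagation setup described above can be illustrated with a minimal sketch: a one-hidden-layer network trained by gradient descent to learn the 2-input XOR mapping, one of the gates in the task. The hidden-layer width of 20 matches the abstract, but the learning rate, initialization, and trial count here are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Truth table for a 2-input XOR gate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 20 units, as in the network described in the abstract.
n_hidden = 20
W1 = rng.normal(scale=1.0, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=1.0, size=(n_hidden, 1))
b2 = np.zeros(1)
lr = 1.0  # assumed learning rate

for _ in range(10000):  # training trials (full passes over the 4 patterns)
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through the sigmoids.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# Thresholded network outputs should recover the XOR truth table.
predictions = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(predictions.ravel())
```

The point of the sketch is the contrast the abstract draws: even this easiest 2-input case needs thousands of weight updates, whereas instructed humans learned all six gates in about 300 trials.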