We investigated neural networks’ ability to generalize during visual object recognition. In three experiments, we show that while basic multilayer neural networks easily learn to classify the objects on which they are trained, they have serious difficulty transferring that knowledge to novel items. However, our experiments also show that when the previously trained networks are subsequently trained on the novel items, they learn to respond correctly to those items much faster than untrained networks do. This indicates that the networks acquire abstract representations that go beyond the specific items on which they were trained. We argue that, with respect to abstract rule learning, the problem with neural networks is therefore not an inability to learn abstractions, but an inability to apply that knowledge when classifying new objects.
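The abstract does not specify the architectures, stimuli, or training details. A minimal sketch of the train-then-transfer protocol it describes, using a small NumPy multilayer network on a toy task (the one-hot stimuli, parity labels, layer sizes, and learning rate are all illustrative assumptions, not the paper's), might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden, n_out):
    """Two-layer network with small random weights (illustrative sizes)."""
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (n_hidden, n_out)), "b2": np.zeros(n_out)}

def forward(p, X):
    h = np.tanh(X @ p["W1"] + p["b1"])
    z = h @ p["W2"] + p["b2"]
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)   # hidden activations, softmax probabilities

def train(p, X, Y, lr=1.0, epochs=500):
    """Plain batch gradient descent on cross-entropy loss (updates p in place)."""
    for _ in range(epochs):
        h, out = forward(p, X)
        d2 = (out - Y) / len(X)                  # softmax + cross-entropy gradient
        d1 = (d2 @ p["W2"].T) * (1 - h ** 2)     # backprop through tanh
        p["W2"] -= lr * h.T @ d2; p["b2"] -= lr * d2.sum(axis=0)
        p["W1"] -= lr * X.T @ d1; p["b1"] -= lr * d1.sum(axis=0)
    return p

def accuracy(p, X, Y):
    _, out = forward(p, X)
    return float(np.mean(out.argmax(axis=1) == Y.argmax(axis=1)))

def epochs_to_criterion(p, X, Y, lr=1.0, max_epochs=1000):
    """Fine-tune one epoch at a time until all items are classified correctly."""
    for e in range(1, max_epochs + 1):
        train(p, X, Y, lr=lr, epochs=1)
        if accuracy(p, X, Y) == 1.0:
            return e
    return max_epochs

# Toy "objects": eight one-hot items, labelled by index parity (assumed task).
X = np.eye(8)
Y = np.eye(2)[np.arange(8) % 2]
train_idx, novel_idx = np.arange(6), np.arange(6, 8)

net = train(make_mlp(8, 16, 2), X[train_idx], Y[train_idx])
acc_train = accuracy(net, X[train_idx], Y[train_idx])  # trained items: learned
acc_novel = accuracy(net, X[novel_idx], Y[novel_idx])  # novel items: unreliable

# Savings measure: epochs to criterion on the novel items, pretrained vs. fresh.
# The paper reports faster relearning for pretrained networks; on this toy
# task the direction of the effect need not replicate.
e_pretrained = epochs_to_criterion(net, X[novel_idx], Y[novel_idx])
e_fresh = epochs_to_criterion(make_mlp(8, 16, 2), X[novel_idx], Y[novel_idx])
```

Comparing `e_pretrained` against `e_fresh` gives the savings-style measure the abstract describes: faster relearning by the pretrained network is taken as evidence that something abstract was retained, even when direct transfer accuracy on the novel items is poor.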