Visual thinking plays a central role in human cognition, yet we know little about the algorithmic operations that make it possible. Starting with outputs of a JIM-like model of shape perception, we present a model that generates object file-like representations that can be stored in memory for future recognition, and can be used by a LISA-like inference engine to reason about those objects. The model encodes structural representations of objects on the fly, stores them in long-term memory, and simultaneously compares them to previously stored representations in order to identify candidate source analogs for inference. Preliminary simulation results suggest that the representations afford the flexibility necessary for visual thinking. The model provides a starting point for simulating not only object recognition, but also reasoning about the form and function of objects.
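The following is a minimal, hypothetical sketch (not the authors' implementation) of the store-and-compare step described above: a new structural description is encoded as a set of role-filler bindings, added to long-term memory, and simultaneously scored against previously stored descriptions to nominate a candidate source analog. All class and variable names, and the use of set-overlap (Jaccard) similarity, are illustrative assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class StructuralDescription:
    """An object file-like representation: a name plus role-filler bindings."""
    name: str
    bindings: frozenset  # e.g. {("attached", "handle", "body"), ...}


@dataclass
class LongTermMemory:
    stored: list = field(default_factory=list)

    def encode_and_compare(self, new_item: StructuralDescription):
        """Store the new description while comparing it to prior ones;
        return the best-matching stored description and its score."""
        best, best_score = None, 0.0
        for old in self.stored:
            overlap = len(new_item.bindings & old.bindings)
            union = len(new_item.bindings | old.bindings)
            score = overlap / union if union else 0.0
            if score > best_score:
                best, best_score = old, score
        self.stored.append(new_item)  # storage happens alongside comparison
        return best, best_score


if __name__ == "__main__":
    ltm = LongTermMemory()
    mug = StructuralDescription("mug", frozenset({
        ("attached", "handle", "body"),
        ("open", "top", "body"),
    }))
    bucket = StructuralDescription("bucket", frozenset({
        ("attached", "handle", "body"),
        ("open", "top", "body"),
        ("wider-than", "top", "base"),
    }))
    ltm.encode_and_compare(mug)
    source, score = ltm.encode_and_compare(bucket)
    print(source.name, round(score, 2))  # mug emerges as the candidate source analog
```

In this toy example, the partial overlap in role-filler bindings is what lets a previously stored object serve as a source analog for inference about a newly encoded one; the real model would operate over distributed, structured representations rather than symbolic tuples.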