This paper describes a working computational model of word recognition that combines a letter classification component with a component that segments the string of classified letters into words and uses a dynamic programming method to match the words against a lexicon of over 2,800 words. The letter classification component is a neural network trained to classify, in parallel, inputs corresponding to 20x188 pixel array images of letter sequences 14 or more letters long. Consistent with human capabilities, the system can classify all 14 letters at a level above chance and, on average, classifies the first 7 or 8 letters in the sequence correctly. Dictionary lookup improves classification accuracy by one character per image. The model is robust, having been trained and tested on the entire text of the book The Wonderful Wizard of Oz, printed in multiple fonts and in both mixed-case and upper-case letters. It provides a computation-level understanding of word recognition capabilities, in which errors are attributable to the theoretically inevitable difficulties associated with learning to classify large input patterns. The model mimics human capabilities for circumventing some of these difficulties by imposing constraints on fixation positions that reduce image variability.
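The abstract does not specify the exact dynamic programming formulation used for lexicon matching, but a standard choice for this kind of noisy-string lookup is Levenshtein (edit) distance computed by dynamic programming. The sketch below is a minimal illustration under that assumption; the tiny lexicon and the `lookup` helper are hypothetical, not taken from the paper.

```python
# Hedged sketch: one common DP approach to matching a noisy classified
# letter string against a lexicon -- pick the word with the smallest
# Levenshtein (edit) distance. The paper's actual method may differ.

def edit_distance(a: str, b: str) -> int:
    """Classic single-row DP edit distance between strings a and b."""
    dp = list(range(len(b) + 1))          # row for the empty prefix of a
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i            # prev holds dp[i-1][j-1]
        for j in range(1, len(b) + 1):
            cur = dp[j]                   # dp[i-1][j] before overwrite
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (a[i - 1] != b[j - 1]))      # substitution
            prev = cur
    return dp[-1]

def lookup(classified: str, lexicon: list[str]) -> str:
    """Return the lexicon entry closest to the classified letter string."""
    return min(lexicon, key=lambda w: edit_distance(classified, w))

# Illustrative lexicon; a noisy classification "wizzrd" resolves to "wizard".
lexicon = ["wizard", "wonder", "lizard", "word"]
print(lookup("wizzrd", lexicon))  # prints "wizard"
```

A lookup of this kind is one way dictionary matching can recover characters the classifier got wrong, consistent with the reported one-character-per-image improvement.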