Distributed Letter Representations in Visual Word Recognition
- Author(s): Stokes, Ryan
- Advisor(s): Hickok, Greg, et al.
Visual word identification is a remarkably automatic and flexible process: it is invariant to size, shape, and location, yet sensitive to letter features and case. There is general agreement that letters are coded hierarchically, starting with local receptive field contrasts that form the simple features that comprise letter identity. However, the mechanism that identifies words based on the locations of letters, known as letter position coding, is not well understood. Early models that coded precise locations in space cannot account for the minimal disruption observed for words with letter position adjustments such as transpositions, additions, or deletions. A wealth of experimental evidence supports a view of visual word recognition that is forgiving of local changes in position. We present a model of letter position encoding, an extension of the Overlap model, that treats letters as distributions along a normalized retinotopic space. A convolutional neural network was used to estimate a pattern of letter identity along overlapping receptive field locations. This method creates inputs to models of letter position that are more closely aligned with behavioral data. Model estimates fit well with a database of priming studies used to investigate the effects of letter position manipulations on participants' reaction times and accuracy. Using model-based functional imaging, we show that the model accurately localizes the visual word form area (VWFA), a functional region in the lateral portion of the left ventral occipitotemporal cortex. Finally, we demonstrate the need for a cohesive computational model of visual word recognition by examining the emergent properties of a combined letter identification and letter position network.
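The core intuition behind Overlap-style position coding can be illustrated with a minimal sketch: each letter is coded as a Gaussian centered on its position, and prime-target similarity is the summed overlap of matching letters' distributions. This is only an illustration of the general scheme, not the dissertation's actual model; the `sigma` value, the greedy letter matching, and the function names are assumptions chosen for clarity.

```python
import math

def overlap(mu1, mu2, sigma=0.5):
    # Closed-form overlap of two equal-width Gaussians centered at mu1 and mu2:
    # decays smoothly with positional distance, so nearby positions still match.
    return math.exp(-((mu1 - mu2) ** 2) / (4 * sigma ** 2))

def match_score(prime, target, sigma=0.5):
    # Greedy, illustrative matching: each target letter claims the unused
    # prime letter of the same identity with the largest positional overlap.
    score = 0.0
    used = set()
    for j, t_letter in enumerate(target):
        best, best_i = 0.0, None
        for i, p_letter in enumerate(prime):
            if i in used or p_letter != t_letter:
                continue
            o = overlap(i, j, sigma)
            if o > best:
                best, best_i = o, i
        if best_i is not None:
            used.add(best_i)
            score += best
    return score / len(target)

# Identity primes match perfectly; a transposed-letter prime ("jugde")
# scores higher than a double-substitution prime ("jupte"), mirroring
# the priming asymmetry the abstract describes.
print(match_score("judge", "judge"))
print(match_score("jugde", "judge"))
print(match_score("jupte", "judge"))
```

Because overlap falls off gradually with distance rather than dropping to zero, transpositions cost only a partial mismatch, which is how this family of models accommodates the minimal disruption observed for transposed-letter primes.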