Animal reading studies have shown that baboons and pigeons can perform word/non-word decisions despite lacking access to phonological and semantic representations. Previous modeling work used various learning models (e.g., deep-learning architectures) to reproduce baboon lexical decisions successfully. What is still missing, however, is a more transparent investigation of the representations underlying the baboons’ behavior. Here we apply the highly transparent Speechless Reader Model, which is motivated by human reading and its underlying neurocognitive processes, to existing baboon data. We implemented four variants comprising different sets of representations: all four included visual-orthographic prediction errors; in addition, one included prediction errors derived from positional letter frequencies, another included prediction errors constrained by specific letter sequences, and a combinatory model combined all three prediction errors. Comparing the models’ behavior with that of the baboons, we identified the model that most closely mirrored the animals’ learning success. This model combined the image-based prediction error with the letter-based prediction error that also accounts for the transitional probabilities within the letter sequence. Thus, we conclude that animals, like humans, use prediction-error representations that capture orthographic codes to implement efficient reading-like behavior.
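To make the letter-based prediction error concrete, the following is a minimal sketch of one of the signals described above: a prediction error derived from transitional (bigram) probabilities within a letter sequence. All function names and the exact error formula (summed 1 − P(transition)) are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch only: a letter-sequence prediction error based on
# transitional (bigram) probabilities. Names and the error formula are
# hypothetical, not taken from the Speechless Reader Model itself.
from collections import Counter

def transitional_probabilities(corpus):
    """Estimate P(next letter | current letter) from a word list."""
    bigrams, firsts = Counter(), Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            bigrams[(a, b)] += 1
            firsts[a] += 1
    return {pair: n / firsts[pair[0]] for pair, n in bigrams.items()}

def sequence_prediction_error(word, trans_probs):
    """Sum of 1 - P(transition) over adjacent letter pairs.

    Unseen or rare transitions (as in many non-words) contribute
    large 'surprise', so non-words tend to yield higher errors."""
    return sum(1.0 - trans_probs.get((a, b), 0.0)
               for a, b in zip(word, word[1:]))

trans = transitional_probabilities(["this", "that"])
# A trained sequence ("this") produces a lower prediction error than a
# scrambled non-word ("tsih") whose transitions were never observed.
print(sequence_prediction_error("this", trans))  # → 0.5
print(sequence_prediction_error("tsih", trans))  # → 3.0
```

Under this toy formulation, a word/non-word decision could be made by thresholding the error, which is the sense in which such a prediction-error representation can support reading-like behavior without phonology or semantics.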