The seminal dual coding theory by Paivio (1971) posited that non-verbal and verbal stimuli differ in their representational format, whereby the former activate both codes while the latter activate only one. These differences in code have implications for tasks such as visual search. The current eye-tracking visual search study aims to re-evaluate this theoretical framework while examining the role played by semantic processing, which has not previously been investigated in this context. We followed the original design of Paivio and Begg (1974), with participants searching for a target, cued either by a word or a picture, in an array of either words or pictures. The target could be semantically related or unrelated to the distractors. Corroborating the original results, response times for correct trials were
faster in pictorial arrays and substantially slower when a cued picture had to be found in a word array. Semantically unrelated targets were fixated sooner and for longer, leading to shorter search responses than semantically related targets. Critically, these semantic-relatedness effects were amplified when the code had to be converted (e.g., from picture to word). Our findings refine our understanding of the role semantic processing plays in the
representational format of words and pictures and the implications it carries for visual search.